Jan 01 08:26:29 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 01 08:26:29 crc restorecon[4748]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:29 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 01 08:26:30 crc restorecon[4748]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc 
restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc 
restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 01 
08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc 
restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 01 08:26:30 crc 
restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30
crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 
08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 01 08:26:30 crc 
restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc 
restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc 
restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
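Every record in this burst has the same shape: restorecon reports a path it did *not* relabel, followed by the SELinux context it found there (here `container_file_t`, which restorecon treats as admin-customized and leaves alone unless forced). When triaging a burst like this, it can help to collapse the stream into (path, context) pairs. A minimal parsing sketch — the regex and function name are my own, not part of any tool shown here:

```python
import re

# Matches journal records of the form seen above:
#   "... restorecon[<pid>]: <path> not reset as customized by admin to <context>"
RECORD = re.compile(
    r"restorecon\[\d+\]: (?P<path>\S+) not reset as customized by admin to "
    r"(?P<context>\S+)"
)

def parse_restorecon(lines):
    """Yield (path, selinux_context) for each 'not reset' record."""
    for line in lines:
        m = RECORD.search(line)
        if m:
            yield m.group("path"), m.group("context")

# One record from the log above, reassembled onto a single line:
sample = ("Jan 01 08:26:30 crc restorecon[4748]: "
          "/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b"
          "/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/"
          "directory-hash/ISRG_Root_X1.pem not reset as customized by admin "
          "to system_u:object_r:container_file_t:s0:c10,c16")

for path, context in parse_restorecon([sample]):
    print(path, context)
```

Grouping the output by context (e.g. with `collections.Counter`) quickly shows that the entire burst is the same MCS-labeled pod volume being skipped, rather than many distinct labeling problems.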
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 
crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc 
restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc 
restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc 
restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc 
restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 01 08:26:30 crc restorecon[4748]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 01 08:26:30 crc restorecon[4748]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 01 08:26:30 crc kubenswrapper[4867]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 01 08:26:30 crc kubenswrapper[4867]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 01 08:26:30 crc kubenswrapper[4867]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 01 08:26:30 crc kubenswrapper[4867]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 01 08:26:30 crc kubenswrapper[4867]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 01 08:26:30 crc kubenswrapper[4867]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.965640 4867 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968604 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968624 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968628 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968632 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968636 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968641 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968647 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968651 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968655 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968658 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968662 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968665 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968669 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968672 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968675 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968679 4867 feature_gate.go:330] unrecognized feature gate: Example Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968683 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968687 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968692 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968695 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968699 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968703 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968707 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968711 4867 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968715 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968718 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968722 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968725 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968729 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968732 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968735 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968739 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968742 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968746 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968750 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968753 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968756 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968760 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968763 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968767 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968770 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968776 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968780 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968783 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968787 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968792 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968796 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968800 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968804 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968807 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968811 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968815 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968818 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968822 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968826 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968829 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968833 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968836 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968839 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968843 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968847 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968850 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968853 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968858 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968861 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968864 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968868 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968871 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968875 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968900 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.968904 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969130 4867 flags.go:64] FLAG: --address="0.0.0.0"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969141 4867 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969149 4867 flags.go:64] FLAG: --anonymous-auth="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969155 4867 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969161 4867 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969165 4867 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969170 4867 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969176 4867 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969180 4867 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969184 4867 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969189 4867 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969193 4867 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969197 4867 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969201 4867 flags.go:64] FLAG: --cgroup-root=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969206 4867 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969210 4867 flags.go:64] FLAG: --client-ca-file=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969214 4867 flags.go:64] FLAG: --cloud-config=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969218 4867 flags.go:64] FLAG: --cloud-provider=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969222 4867 flags.go:64] FLAG: --cluster-dns="[]"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969228 4867 flags.go:64] FLAG: --cluster-domain=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969232 4867 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969236 4867 flags.go:64] FLAG: --config-dir=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969240 4867 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969245 4867 flags.go:64] FLAG: --container-log-max-files="5"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969251 4867 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969255 4867 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969259 4867 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969263 4867 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969267 4867 flags.go:64] FLAG: --contention-profiling="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969271 4867 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969275 4867 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969279 4867 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969284 4867 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969289 4867 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969293 4867 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969298 4867 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969302 4867 flags.go:64] FLAG: --enable-load-reader="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969306 4867 flags.go:64] FLAG: --enable-server="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969310 4867 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969316 4867 flags.go:64] FLAG: --event-burst="100"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969320 4867 flags.go:64] FLAG: --event-qps="50"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969324 4867 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969328 4867 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969332 4867 flags.go:64] FLAG: --eviction-hard=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969336 4867 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969340 4867 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969344 4867 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969348 4867 flags.go:64] FLAG: --eviction-soft=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969352 4867 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969356 4867 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969360 4867 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969364 4867 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969371 4867 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969375 4867 flags.go:64] FLAG: --fail-swap-on="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969379 4867 flags.go:64] FLAG: --feature-gates=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969384 4867 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969388 4867 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969392 4867 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969396 4867 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969400 4867 flags.go:64] FLAG: --healthz-port="10248"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969404 4867 flags.go:64] FLAG: --help="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969408 4867 flags.go:64] FLAG: --hostname-override=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969412 4867 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969416 4867 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969421 4867 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969425 4867 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969429 4867 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969433 4867 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969436 4867 flags.go:64] FLAG: --image-service-endpoint=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969440 4867 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969444 4867 flags.go:64] FLAG: --kube-api-burst="100"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969448 4867 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969452 4867 flags.go:64] FLAG: --kube-api-qps="50"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969456 4867 flags.go:64] FLAG: --kube-reserved=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969460 4867 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969464 4867 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969468 4867 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969472 4867 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969476 4867 flags.go:64] FLAG: --lock-file=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969479 4867 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969483 4867 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969488 4867 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969493 4867 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969497 4867 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969504 4867 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969507 4867 flags.go:64] FLAG: --logging-format="text"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969511 4867 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969516 4867 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969520 4867 flags.go:64] FLAG: --manifest-url=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969523 4867 flags.go:64] FLAG: --manifest-url-header=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969529 4867 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969533 4867 flags.go:64] FLAG: --max-open-files="1000000"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969538 4867 flags.go:64] FLAG: --max-pods="110"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969542 4867 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969546 4867 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969550 4867 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969556 4867 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969560 4867 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969564 4867 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969568 4867 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969578 4867 flags.go:64] FLAG: --node-status-max-images="50"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969582 4867 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969586 4867 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969590 4867 flags.go:64] FLAG: --pod-cidr=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969594 4867 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969600 4867 flags.go:64] FLAG: --pod-manifest-path=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969604 4867 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969608 4867 flags.go:64] FLAG: --pods-per-core="0"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969612 4867 flags.go:64] FLAG: --port="10250"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969616 4867 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969620 4867 flags.go:64] FLAG: --provider-id=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969624 4867 flags.go:64] FLAG: --qos-reserved=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969628 4867 flags.go:64] FLAG: --read-only-port="10255"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969632 4867 flags.go:64] FLAG: --register-node="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969636 4867 flags.go:64] FLAG: --register-schedulable="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969640 4867 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969649 4867 flags.go:64] FLAG: --registry-burst="10"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969653 4867 flags.go:64] FLAG: --registry-qps="5"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969656 4867 flags.go:64] FLAG: --reserved-cpus=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969660 4867 flags.go:64] FLAG: --reserved-memory=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969665 4867 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969669 4867 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969673 4867 flags.go:64] FLAG: --rotate-certificates="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969677 4867 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969681 4867 flags.go:64] FLAG: --runonce="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969685 4867 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969689 4867 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969694 4867 flags.go:64] FLAG: --seccomp-default="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969697 4867 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969703 4867 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969707 4867 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969711 4867 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969715 4867 flags.go:64] FLAG: --storage-driver-password="root"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969719 4867 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969723 4867 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969727 4867 flags.go:64] FLAG: --storage-driver-user="root"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969731 4867 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969735 4867 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969739 4867 flags.go:64] FLAG: --system-cgroups=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969743 4867 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969749 4867 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969753 4867 flags.go:64] FLAG: --tls-cert-file=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969757 4867 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969762 4867 flags.go:64] FLAG: --tls-min-version=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969766 4867 flags.go:64] FLAG: --tls-private-key-file=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969770 4867 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969774 4867 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969778 4867 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969789 4867 flags.go:64] FLAG: --v="2"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969794 4867 flags.go:64] FLAG: --version="false"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969800 4867 flags.go:64] FLAG: --vmodule=""
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969804 4867 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.969808 4867 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969918 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969924 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969928 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969933 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969936 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969940 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969944 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969947 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969953 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969956 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969961 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969964 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969967 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969971 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969974 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969978 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969982 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969987 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969990 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969994 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.969998 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970001 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970006 4867 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970010 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970014 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970017 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970021 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970026 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970030 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970034 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970037 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970041 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970044 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970048 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970051 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970055 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970058 4867 feature_gate.go:330] unrecognized feature gate: Example
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970061 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970065 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970068 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970073 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970077 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970080 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970084 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970087 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970091 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970094 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970098 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970102 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970106 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970110 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970114 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970117 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970121 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970124 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970127 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970131 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970134 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970138 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970144 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970149 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970153 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970157 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970160 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970164 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970168 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970171 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970175 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970178 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970182 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.970185 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.970191 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.980959 4867 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.980999 4867 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981130 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981146 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981158 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981168 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981176 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981185 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981195 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981207 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981217 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981226 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981235 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981243 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981250 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981258 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981266 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981276 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981286 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981295 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981304 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981314 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981325 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981335 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981344 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981352 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981360 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981367 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981375 4867 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981384 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981394 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981404 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981414 4867 feature_gate.go:330] unrecognized feature gate: Example Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981427 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981436 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981447 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981456 4867 feature_gate.go:330] 
unrecognized feature gate: NetworkLiveMigration Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981466 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981477 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981488 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981502 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981521 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981532 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981542 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981553 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981563 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981573 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981582 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981592 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981603 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981613 4867 feature_gate.go:330] unrecognized 
feature gate: SetEIPForNLBIngressController Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981624 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981634 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981644 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981651 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981659 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981667 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981675 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981683 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981690 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981699 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981706 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981714 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981722 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981729 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981737 
4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981745 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981753 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981761 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981768 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981777 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981848 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.981865 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.981916 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982166 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982181 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982190 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982200 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982209 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982217 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982225 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982233 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982242 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982249 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 
08:26:30.982257 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982264 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982272 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982282 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982290 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982297 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982304 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982312 4867 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982320 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982330 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982338 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982346 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982355 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982362 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982371 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 01 08:26:30 crc 
kubenswrapper[4867]: W0101 08:26:30.982379 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982388 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982399 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982408 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982418 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982431 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982442 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982450 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982458 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982466 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982474 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982481 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982490 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982498 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 01 08:26:30 crc 
kubenswrapper[4867]: W0101 08:26:30.982505 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982513 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982521 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982529 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982536 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982544 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982552 4867 feature_gate.go:330] unrecognized feature gate: Example Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982561 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982568 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982576 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982583 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982595 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982605 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982614 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982623 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982632 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982642 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982652 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982661 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982671 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982681 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982690 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982698 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982706 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982716 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982724 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982732 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982740 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982748 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982756 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982764 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 01 08:26:30 crc kubenswrapper[4867]: W0101 08:26:30.982771 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.982784 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false 
RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.983073 4867 server.go:940] "Client rotation is on, will bootstrap in background" Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.991528 4867 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.991722 4867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.992671 4867 server.go:997] "Starting client certificate rotation" Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.992713 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.992947 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-21 00:58:11.989839441 +0000 UTC Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.993046 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 01 08:26:30 crc kubenswrapper[4867]: I0101 08:26:30.999500 4867 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.001427 4867 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial 
tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.002270 4867 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.011821 4867 log.go:25] "Validated CRI v1 runtime API" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.033683 4867 log.go:25] "Validated CRI v1 image API" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.036007 4867 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.038236 4867 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-01-08-22-01-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.038274 4867 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.060411 4867 manager.go:217] Machine: {Timestamp:2026-01-01 08:26:31.055307083 +0000 UTC m=+0.190575872 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec 
SystemUUID:e821d981-d45f-45c6-abaa-62a41c48c1e4 BootID:206ef261-50f6-4f09-a8e0-3b8f2babe599 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4a:6b:ef Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4a:6b:ef Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a9:3b:bf Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:88:40:d1 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f8:0a:10 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f5:1d:78 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:28:6a:48 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:10:86:2e:a2:b5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:0b:c8:b8:1f:b4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] 
Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.060840 4867 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.061219 4867 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.061967 4867 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.062303 4867 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.062363 4867 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.062692 4867 topology_manager.go:138] "Creating topology manager with none policy" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.062713 4867 container_manager_linux.go:303] "Creating device plugin manager" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.063020 4867 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.063087 4867 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.063524 4867 state_mem.go:36] "Initialized new in-memory state store" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.063661 4867 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.064703 4867 kubelet.go:418] "Attempting to sync node with API server" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.064737 4867 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.064776 4867 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.064797 4867 kubelet.go:324] "Adding apiserver pod source" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.064815 4867 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 01 08:26:31 crc kubenswrapper[4867]: W0101 08:26:31.067225 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.067312 4867 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.067342 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 01 08:26:31 crc kubenswrapper[4867]: W0101 08:26:31.067518 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.067628 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.067974 4867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.068955 4867 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069620 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069651 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069661 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069670 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069687 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069696 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069706 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069722 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069734 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069744 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069759 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.069770 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.070029 4867 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.070620 4867 server.go:1280] "Started kubelet" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.070634 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.071034 4867 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.071024 4867 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.072114 4867 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 01 08:26:31 crc systemd[1]: Started Kubernetes Kubelet. Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.072689 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.073108 4867 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.073159 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:27:48.394938099 +0000 UTC Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.073223 4867 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.073200 4867 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.073271 4867 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.073389 4867 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 01 08:26:31 crc kubenswrapper[4867]: W0101 08:26:31.074177 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.074232 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.074586 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="200ms" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.074930 4867 factory.go:55] Registering systemd factory Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.074388 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18868df5e6f1717c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-01 08:26:31.070585212 +0000 UTC m=+0.205853981,LastTimestamp:2026-01-01 08:26:31.070585212 +0000 UTC 
m=+0.205853981,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.076236 4867 factory.go:221] Registration of the systemd container factory successfully Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.076589 4867 factory.go:153] Registering CRI-O factory Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.076664 4867 factory.go:221] Registration of the crio container factory successfully Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.076763 4867 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.076842 4867 factory.go:103] Registering Raw factory Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.076922 4867 manager.go:1196] Started watching for new ooms in manager Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.077504 4867 manager.go:319] Starting recovery of all containers Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.078778 4867 server.go:460] "Adding debug handlers to kubelet server" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.085316 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.085431 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 01 08:26:31 crc 
kubenswrapper[4867]: I0101 08:26:31.085509 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.085566 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.085627 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.085691 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.085756 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.085824 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.085910 4867 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.085980 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086041 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086098 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086164 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086226 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086285 4867 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086341 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086395 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086454 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086510 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086564 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086618 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086676 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086773 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086846 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086918 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.086977 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087036 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087191 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087264 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087327 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087387 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087441 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087496 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087560 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087616 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087670 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087722 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087779 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087844 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.087962 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088018 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088073 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088134 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088188 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088249 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088311 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088365 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088418 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088483 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088548 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088605 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088661 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.088719 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.089283 4867 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.089436 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.089500 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.089556 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.089611 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.089668 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.089728 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.089794 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.089848 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.089915 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.089973 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090027 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090089 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090145 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090219 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090276 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090332 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090391 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090453 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090507 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090563 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090619 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090677 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090732 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090786 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090845 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.090950 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091009 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" 
seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091072 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091130 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091206 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091266 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091325 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091386 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091442 4867 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091495 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091555 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091610 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091671 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091730 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091785 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091838 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091915 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.091974 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092041 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092097 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092152 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092206 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092262 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092326 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092385 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092440 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092502 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092559 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092622 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092679 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092735 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092794 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092851 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" 
seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092930 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.092995 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093059 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093113 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093168 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093227 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093292 4867 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093353 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093411 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093466 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093521 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093585 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093640 4867 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093696 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093753 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093808 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093867 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.093942 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094001 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094057 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094113 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094180 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094243 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094300 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094354 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094414 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094471 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094531 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094586 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094649 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094705 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 01 08:26:31 crc 
kubenswrapper[4867]: I0101 08:26:31.094760 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094823 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094908 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.094967 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095020 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095074 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095131 4867 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095216 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095275 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095330 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095384 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095437 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095497 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095552 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095606 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095661 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095713 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095771 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095826 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.095933 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096022 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096100 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096179 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096259 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096328 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" 
seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096385 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096441 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096506 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096580 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096641 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096697 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 
08:26:31.096752 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096812 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096904 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.096976 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097034 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097088 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097147 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097207 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097272 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097328 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097383 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097439 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097495 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097557 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097621 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097678 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097737 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097792 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097851 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.097950 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098009 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098068 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098124 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098184 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098240 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098294 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098347 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098402 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098478 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098556 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098624 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098682 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098744 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098799 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098861 4867 reconstruct.go:97] "Volume reconstruction finished" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.098929 4867 reconciler.go:26] "Reconciler: start to sync state" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.104453 4867 manager.go:324] Recovery completed Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.114208 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.116043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.119578 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.119628 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.120716 4867 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.120734 4867 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.120825 4867 state_mem.go:36] "Initialized new in-memory state store" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.125766 4867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.127271 4867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.127306 4867 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.127328 4867 kubelet.go:2335] "Starting kubelet main sync loop" Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.127370 4867 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 01 08:26:31 crc kubenswrapper[4867]: W0101 08:26:31.128209 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.128260 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 01 08:26:31 crc 
kubenswrapper[4867]: I0101 08:26:31.131759 4867 policy_none.go:49] "None policy: Start" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.133076 4867 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.133098 4867 state_mem.go:35] "Initializing new in-memory state store" Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.173471 4867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.192759 4867 manager.go:334] "Starting Device Plugin manager" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.192812 4867 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.192829 4867 server.go:79] "Starting device plugin registration server" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.193391 4867 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.193412 4867 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.193578 4867 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.193705 4867 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.193729 4867 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.204896 4867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.227536 4867 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.227730 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.230134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.230173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.230184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.230343 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.230555 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.230607 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.231579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.231645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.231679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.231899 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.231926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.231934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.232030 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.232314 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.232387 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.232751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.232822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.232847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.233104 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.233276 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.233330 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.233602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.233631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.233641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.234210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.234327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.234710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.235759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.234803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.235788 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.235933 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: 
I0101 08:26:31.236035 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.236115 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.238869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.238922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.238935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.239261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.239294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.239307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.239466 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.239501 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.240228 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.240249 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.240257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.278139 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="400ms" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.293790 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.295138 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.295190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.295208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.295242 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.295849 4867 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302075 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302175 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302222 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302247 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302271 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302294 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302314 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302337 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302358 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302396 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302459 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302494 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302523 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302555 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.302579 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403565 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403635 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403740 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403766 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403789 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403818 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403828 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403909 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403946 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403999 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404021 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403843 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404119 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404155 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404195 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404217 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404255 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404309 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404347 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404325 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404390 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404416 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404458 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404496 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404524 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc 
kubenswrapper[4867]: I0101 08:26:31.404526 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.403961 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.404592 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.496914 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.498773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.498856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.498877 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.498946 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.499548 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.562493 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.592796 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: W0101 08:26:31.597705 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-63ded9d0a7e32a0140f25b442a046daf5bc38ebb31ab6d7a32b5d4ef33855a6a WatchSource:0}: Error finding container 63ded9d0a7e32a0140f25b442a046daf5bc38ebb31ab6d7a32b5d4ef33855a6a: Status 404 returned error can't find the container with id 63ded9d0a7e32a0140f25b442a046daf5bc38ebb31ab6d7a32b5d4ef33855a6a Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.607721 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.618558 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: W0101 08:26:31.622834 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-965ff27e39f5af6fc0f41cb2f12043258d2e964c311ce694114f27e088948c99 WatchSource:0}: Error finding container 965ff27e39f5af6fc0f41cb2f12043258d2e964c311ce694114f27e088948c99: Status 404 returned error can't find the container with id 965ff27e39f5af6fc0f41cb2f12043258d2e964c311ce694114f27e088948c99 Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.624262 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 01 08:26:31 crc kubenswrapper[4867]: W0101 08:26:31.630264 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c3c66e1048223d85789e48ccd81b6d9080382dcaca8b147d703bfe926a7fa313 WatchSource:0}: Error finding container c3c66e1048223d85789e48ccd81b6d9080382dcaca8b147d703bfe926a7fa313: Status 404 returned error can't find the container with id c3c66e1048223d85789e48ccd81b6d9080382dcaca8b147d703bfe926a7fa313 Jan 01 08:26:31 crc kubenswrapper[4867]: W0101 08:26:31.642910 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a7fba181c41ff4c132dac827a04b9d2fa40779b8587c9380ec98f28910029d7d WatchSource:0}: Error finding container a7fba181c41ff4c132dac827a04b9d2fa40779b8587c9380ec98f28910029d7d: Status 404 returned error can't find the container with id a7fba181c41ff4c132dac827a04b9d2fa40779b8587c9380ec98f28910029d7d Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.679308 4867 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="800ms" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.900078 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.901653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.901703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.901716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:31 crc kubenswrapper[4867]: I0101 08:26:31.901748 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.902328 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Jan 01 08:26:31 crc kubenswrapper[4867]: W0101 08:26:31.939558 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 01 08:26:31 crc kubenswrapper[4867]: E0101 08:26:31.939791 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection 
refused" logger="UnhandledError" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.071480 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.073488 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:02:06.036673637 +0000 UTC Jan 01 08:26:32 crc kubenswrapper[4867]: W0101 08:26:32.085364 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 01 08:26:32 crc kubenswrapper[4867]: E0101 08:26:32.085457 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 01 08:26:32 crc kubenswrapper[4867]: W0101 08:26:32.093033 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 01 08:26:32 crc kubenswrapper[4867]: E0101 08:26:32.093094 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.132845 4867 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2692cc645abb3e0a02a141050e25d3a2d28f20a8dd4f95fde496cae4486662e9" exitCode=0 Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.132950 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2692cc645abb3e0a02a141050e25d3a2d28f20a8dd4f95fde496cae4486662e9"} Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.133206 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"965ff27e39f5af6fc0f41cb2f12043258d2e964c311ce694114f27e088948c99"} Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.133431 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.134705 4867 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b7af3c1c378c65ecb77fe867cfcfcb8e90466f1cddf579a1b7c386dc8eefc204" exitCode=0 Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.134764 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b7af3c1c378c65ecb77fe867cfcfcb8e90466f1cddf579a1b7c386dc8eefc204"} Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.134792 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"63ded9d0a7e32a0140f25b442a046daf5bc38ebb31ab6d7a32b5d4ef33855a6a"} Jan 01 
08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.134861 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.135754 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.135800 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.135819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.136295 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.136325 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.136338 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.137861 4867 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558" exitCode=0 Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.137909 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558"} Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.138030 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1b65160c4bb18576ff6f756646eb9566070aa89f26f6525ea4fa9981b192d4a6"} Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.138113 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.139190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.139231 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.139243 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.140192 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288"} Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.140216 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a7fba181c41ff4c132dac827a04b9d2fa40779b8587c9380ec98f28910029d7d"} Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.142816 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac" exitCode=0 Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.142866 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac"} Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.142965 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c3c66e1048223d85789e48ccd81b6d9080382dcaca8b147d703bfe926a7fa313"} Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.143098 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.144271 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.144311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.144323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.151636 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.152723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.152813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.152840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:32 crc kubenswrapper[4867]: E0101 08:26:32.480300 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="1.6s" Jan 01 08:26:32 crc kubenswrapper[4867]: W0101 08:26:32.658162 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 01 08:26:32 crc kubenswrapper[4867]: E0101 08:26:32.658575 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.703136 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.705101 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.705136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.705160 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:32 crc kubenswrapper[4867]: I0101 08:26:32.705211 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 01 08:26:32 crc kubenswrapper[4867]: E0101 08:26:32.705670 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" 
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.073904 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 15:18:19.73415964 +0000 UTC Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.073965 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 366h51m46.660197912s for next certificate rotation Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.075159 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.146679 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686"} Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.146728 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762"} Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.146740 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7"} Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.146832 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.147815 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.147865 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.147874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.151116 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066"} Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.151161 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be"} Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.151172 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9"} Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.151174 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.152152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.152191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.152210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:33 crc kubenswrapper[4867]: 
I0101 08:26:33.155168 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0"}
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.155215 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66"}
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.155237 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0"}
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.155255 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967"}
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.156844 4867 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6d56e4a1e9ec73a3b10082536eb153245ce0f6416abaf0b0289980556c239761" exitCode=0
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.156950 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6d56e4a1e9ec73a3b10082536eb153245ce0f6416abaf0b0289980556c239761"}
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.157105 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.158191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.158234 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.158252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.160868 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d769179e4504fd8f8dffde50dc5e4e944e232fa5a431ff933e28d41385fb7e71"}
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.161081 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.166655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.166814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:33 crc kubenswrapper[4867]: I0101 08:26:33.166968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.168459 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a"}
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.168518 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.169457 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.169490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.169501 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.170831 4867 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f32fe8f6a93a219d706b3fffcaa4304f97fe935e5860883f9b3cd36cded6a872" exitCode=0
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.170920 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f32fe8f6a93a219d706b3fffcaa4304f97fe935e5860883f9b3cd36cded6a872"}
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.171036 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.171041 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.172198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.172238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.172258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.172268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.172303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.172320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.306687 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.308252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.308321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.308348 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.308394 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.566630 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.587702 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 01 08:26:34 crc kubenswrapper[4867]: I0101 08:26:34.593631 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.183511 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3b9573a0775b3de5a656055b91134577c72b89ef83c248ac7768a81857aa5a2a"}
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.183670 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"becd25296a46df0c1d48c938c7f9a9507285b959944ed6c5b45a7fd9b2a7a03a"}
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.183566 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.183731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f193097904a153b498527827bd7da0ad07b01b1da256defa6f4ee5e33449baa"}
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.183576 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.185398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.185471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.185513 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.185675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.185743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.185793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:35 crc kubenswrapper[4867]: I0101 08:26:35.747864 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.088178 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.193462 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e581719403b464558952cc64f4d69be4170c02d8edbda8badc2583fb08ab16f6"}
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.193541 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"90eac4894a2a22cc2f2fc6b8f35bd370dd765f7fef625045e7a92a46a1cba3dd"}
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.193558 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.193598 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.193670 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.193795 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.195097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.195147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.195165 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.195274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.195308 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.195324 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.195735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.195768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.195786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:36 crc kubenswrapper[4867]: I0101 08:26:36.472295 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.194509 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.196533 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.196562 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.196648 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.197142 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.198252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.198313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.198336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.198345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.198391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.198409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.198645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.198671 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.198687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.601377 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.601695 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.604103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.604218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:37 crc kubenswrapper[4867]: I0101 08:26:37.604247 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.200178 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.200222 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.200257 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.202136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.202156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.202194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.202204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.202213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.202222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.257290 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.748963 4867 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.749123 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.775366 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.775609 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.777275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.777347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:38 crc kubenswrapper[4867]: I0101 08:26:38.777369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:39 crc kubenswrapper[4867]: I0101 08:26:39.203317 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:39 crc kubenswrapper[4867]: I0101 08:26:39.204862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:39 crc kubenswrapper[4867]: I0101 08:26:39.204975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:39 crc kubenswrapper[4867]: I0101 08:26:39.205003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:40 crc kubenswrapper[4867]: I0101 08:26:40.657741 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 01 08:26:40 crc kubenswrapper[4867]: I0101 08:26:40.658222 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:40 crc kubenswrapper[4867]: I0101 08:26:40.660594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:40 crc kubenswrapper[4867]: I0101 08:26:40.660688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:40 crc kubenswrapper[4867]: I0101 08:26:40.660724 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:41 crc kubenswrapper[4867]: E0101 08:26:41.205813 4867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 01 08:26:43 crc kubenswrapper[4867]: I0101 08:26:43.072664 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 01 08:26:43 crc kubenswrapper[4867]: E0101 08:26:43.076938 4867 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 01 08:26:43 crc kubenswrapper[4867]: W0101 08:26:43.639139 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 01 08:26:43 crc kubenswrapper[4867]: I0101 08:26:43.639281 4867 trace.go:236] Trace[757505575]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Jan-2026 08:26:33.637) (total time: 10001ms):
Jan 01 08:26:43 crc kubenswrapper[4867]: Trace[757505575]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:26:43.639)
Jan 01 08:26:43 crc kubenswrapper[4867]: Trace[757505575]: [10.001486973s] [10.001486973s] END
Jan 01 08:26:43 crc kubenswrapper[4867]: E0101 08:26:43.639313 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 01 08:26:43 crc kubenswrapper[4867]: I0101 08:26:43.697468 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 01 08:26:43 crc kubenswrapper[4867]: I0101 08:26:43.697566 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 01 08:26:43 crc kubenswrapper[4867]: I0101 08:26:43.782717 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]log ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]etcd ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/priority-and-fairness-filter ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/start-apiextensions-informers ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld
Jan 01 08:26:43 crc kubenswrapper[4867]: [-]poststarthook/crd-informer-synced failed: reason withheld
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/start-system-namespaces-controller ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 01 08:26:43 crc kubenswrapper[4867]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/bootstrap-controller ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [-]poststarthook/apiservice-registration-controller failed: reason withheld
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]autoregister-completion ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/apiservice-openapi-controller ok
Jan 01 08:26:43 crc kubenswrapper[4867]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 01 08:26:43 crc kubenswrapper[4867]: livez check failed
Jan 01 08:26:43 crc kubenswrapper[4867]: I0101 08:26:43.782794 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 01 08:26:47 crc kubenswrapper[4867]: I0101 08:26:47.235229 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 01 08:26:47 crc kubenswrapper[4867]: I0101 08:26:47.235682 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:47 crc kubenswrapper[4867]: I0101 08:26:47.237859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:47 crc kubenswrapper[4867]: I0101 08:26:47.237972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:47 crc kubenswrapper[4867]: I0101 08:26:47.237999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:47 crc kubenswrapper[4867]: I0101 08:26:47.252368 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 01 08:26:47 crc kubenswrapper[4867]: I0101 08:26:47.304623 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 01 08:26:47 crc kubenswrapper[4867]: I0101 08:26:47.322430 4867 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.229756 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.231086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.231147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.231171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.265697 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.265876 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.267376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.267428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.267452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.691006 4867 trace.go:236] Trace[1595852773]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Jan-2026 08:26:33.903) (total time: 14787ms):
Jan 01 08:26:48 crc kubenswrapper[4867]: Trace[1595852773]: ---"Objects listed" error: 14787ms (08:26:48.690)
Jan 01 08:26:48 crc kubenswrapper[4867]: Trace[1595852773]: [14.787310237s] [14.787310237s] END
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.691050 4867 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.693484 4867 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 01 08:26:48 crc kubenswrapper[4867]: E0101 08:26:48.693680 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.696816 4867 trace.go:236] Trace[1396588305]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Jan-2026 08:26:35.575) (total time: 13121ms):
Jan 01 08:26:48 crc kubenswrapper[4867]: Trace[1396588305]: ---"Objects listed" error: 13121ms (08:26:48.696)
Jan 01 08:26:48 crc kubenswrapper[4867]: Trace[1396588305]: [13.121060693s] [13.121060693s] END
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.696882 4867 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 01 08:26:48 crc kubenswrapper[4867]: E0101 08:26:48.701191 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.704320 4867 trace.go:236] Trace[1461267221]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Jan-2026 08:26:34.336) (total time: 14367ms):
Jan 01 08:26:48 crc kubenswrapper[4867]: Trace[1461267221]: ---"Objects listed" error: 14367ms (08:26:48.703)
Jan 01 08:26:48 crc kubenswrapper[4867]: Trace[1461267221]: [14.367921335s] [14.367921335s] END
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.704373 4867 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.749258 4867 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.749369 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.783251 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 01 08:26:48 crc kubenswrapper[4867]: I0101 08:26:48.790040 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.075970 4867 apiserver.go:52] "Watching apiserver"
Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.079649 4867 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.080570 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.081090 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.081271 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.081287 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.081575 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.081371 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.081795 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.081821 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.081988 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.081999 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.084134 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.084356 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.084443 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.085040 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.086439 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.087409 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.087508 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.087654 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.088417 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.119386 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.134008 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.145216 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.155852 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.169939 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.174329 4867 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.181644 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.192831 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db
06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195676 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195709 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195731 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195750 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195769 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195786 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195802 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195818 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195846 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195862 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195879 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195921 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195942 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195974 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.195989 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196005 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196022 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196042 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196059 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196076 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196094 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196112 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196129 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196144 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196174 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196200 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196200 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196216 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196246 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196263 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196282 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196298 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196315 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196331 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196347 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196358 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196362 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196389 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196408 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196422 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196441 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196457 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196500 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196518 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196535 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196552 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196568 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196585 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196600 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196619 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196655 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196672 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196688 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196704 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196722 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196739 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196755 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196770 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196786 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196802 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196818 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196835 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196852 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196870 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" 
(UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196902 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196924 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196940 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196954 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.196969 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 01 08:26:49 crc 
kubenswrapper[4867]: I0101 08:26:49.196986 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197000 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197014 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197032 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197047 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197065 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197081 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197096 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197112 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197127 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197143 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 01 08:26:49 crc 
kubenswrapper[4867]: I0101 08:26:49.197160 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197177 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197176 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197194 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197211 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197228 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197246 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197267 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197284 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197301 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197318 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197334 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197334 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197352 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197368 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197384 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197401 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197421 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197461 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197476 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197494 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197512 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197527 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197548 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 01 
08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197565 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197581 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197599 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197615 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197617 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") 
pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197651 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197682 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197716 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197802 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197855 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197878 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197870 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197924 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197905 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.197975 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198021 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198049 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198040 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198067 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198121 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198166 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198210 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 01 08:26:49 crc 
kubenswrapper[4867]: I0101 08:26:49.198225 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198249 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198295 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198313 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198336 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198343 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198382 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198432 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198470 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198515 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198558 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198594 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198712 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198809 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198881 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.198963 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199011 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199094 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 01 08:26:49 crc 
kubenswrapper[4867]: I0101 08:26:49.199126 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199145 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199187 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199230 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199240 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199273 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199319 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199321 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199359 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199414 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199429 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199460 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199508 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199548 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199587 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199631 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199668 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199713 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199760 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199803 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199843 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199881 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 01 
08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199942 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199982 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200017 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200053 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200091 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200126 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") 
pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200171 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200223 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200267 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200313 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200349 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200417 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200455 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200522 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200558 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200600 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200637 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200675 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200711 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200746 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200789 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200825 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200872 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205681 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205752 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205788 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205819 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205856 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205962 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.206000 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.206808 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.207352 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.207448 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 
08:26:49.207492 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.207571 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.207644 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.207719 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.207759 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.207830 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.207974 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208014 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208048 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208148 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 01 08:26:49 crc 
kubenswrapper[4867]: I0101 08:26:49.208182 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208226 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208263 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208527 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208710 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208828 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212594 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212941 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 01 08:26:49 crc 
kubenswrapper[4867]: I0101 08:26:49.212984 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213016 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213081 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213124 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213176 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213215 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213244 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213272 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213495 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213519 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213536 4867 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213549 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213564 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213579 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213592 4867 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213602 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213619 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213634 4867 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213648 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213665 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213680 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213694 4867 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213709 4867 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213724 4867 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213738 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213753 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.214187 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.214610 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.215838 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.215869 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.217044 4867 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199509 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199688 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.218536 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199683 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199806 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.199955 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200024 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200042 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200165 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200212 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200608 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.200831 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.201398 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.203085 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.203189 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.203616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.204598 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.204672 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.204677 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.204745 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.204749 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.204798 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.204803 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.204819 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.204840 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.204972 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205007 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205023 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205047 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205240 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205370 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205422 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205463 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205481 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205499 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205619 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.205758 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.206458 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.206560 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.206958 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.207149 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208162 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208259 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.208606 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.209725 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.209923 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.210082 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.210106 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.210217 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.210396 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.210661 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.211111 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.211392 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.211437 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.211493 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.211911 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.211952 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212040 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212276 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212369 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212400 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212494 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212505 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212630 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.212814 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213024 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213124 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213127 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213486 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213659 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.213976 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.214176 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.214625 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.214685 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.214698 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.214704 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.214807 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.214957 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:26:49.714909571 +0000 UTC m=+18.850178340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.215111 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.215183 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.215319 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.215439 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.215544 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.216099 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.216528 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.216593 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.216639 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.216671 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.217117 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.217668 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.217784 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.218001 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.218214 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.218749 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.218953 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.219336 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.219430 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.219598 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.219742 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.220048 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.220054 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.220362 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.220729 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.220817 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.220987 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.220857 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.221152 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.221236 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.233297 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.233596 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.233651 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.233746 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.235037 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.235105 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.235171 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.236118 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.236175 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.236188 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.236025 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.236245 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.236669 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.236315 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.236633 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.237030 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:49.736996737 +0000 UTC m=+18.872265506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.237146 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.237427 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:49.737391008 +0000 UTC m=+18.872659777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.238001 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.238308 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.238477 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.238903 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.238925 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.239070 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.239710 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.239749 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.239769 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.239788 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.239842 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:49.739820736 +0000 UTC m=+18.875089525 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.239946 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.239962 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.239979 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.240016 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:49.740002501 +0000 UTC m=+18.875271280 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.240099 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.240110 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.240414 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.240799 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.240968 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.241131 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.241297 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.241374 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.241763 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.242196 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.242292 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.242377 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.242621 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.242995 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.243092 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.243161 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.244612 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.245401 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.245905 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.246309 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.247844 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.248657 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.252215 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.248443 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.253535 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.253812 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.255557 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.258758 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.258838 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.259028 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.259150 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.259192 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.261116 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.262134 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.263413 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.264055 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.264469 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.264555 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.264752 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.266432 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.266994 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.267800 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.268560 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.268586 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.268605 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.268664 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.268752 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.269118 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.269324 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.269473 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.269570 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.282471 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.286480 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.293755 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.295875 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.296524 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.306369 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314640 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314667 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314714 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314725 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314735 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314744 4867 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314753 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314762 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314770 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314778 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 01 
08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314786 4867 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314795 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314807 4867 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314817 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314825 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314785 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314833 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 
crc kubenswrapper[4867]: I0101 08:26:49.314912 4867 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314923 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314934 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314943 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314952 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314961 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314985 4867 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.314993 4867 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315003 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315012 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315010 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315022 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315094 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315110 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 
01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315123 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315135 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315145 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315156 4867 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315167 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315177 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315187 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315198 4867 
reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315209 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315221 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315231 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315242 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315253 4867 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315263 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315274 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315285 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315295 4867 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315305 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315315 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315327 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315337 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315351 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315365 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315377 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315388 4867 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315400 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315411 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315421 4867 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315432 4867 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315443 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315454 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315467 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315478 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315488 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315499 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315511 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315522 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315531 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315542 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315552 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315562 4867 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315573 4867 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315583 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315592 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315602 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315612 4867 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315622 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315632 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315644 4867 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315654 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath 
\"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315664 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315693 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315703 4867 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315712 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315723 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315733 4867 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315746 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315757 4867 reconciler_common.go:293] "Volume detached 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315766 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315777 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315788 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315799 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315808 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315818 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315829 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315848 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315859 4867 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315871 4867 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315880 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315909 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315919 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315928 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath 
\"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315939 4867 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315950 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315961 4867 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315970 4867 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315979 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315987 4867 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.315997 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316007 4867 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316016 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316026 4867 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316036 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316045 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316055 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316065 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316075 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316084 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316093 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316103 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316113 4867 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316122 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316131 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316140 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node 
\"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316149 4867 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316166 4867 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316175 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316184 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316195 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316212 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316222 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 
08:26:49.316232 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316241 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316251 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316261 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316270 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316279 4867 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316290 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316299 4867 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316308 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316317 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316342 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316353 4867 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316363 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316373 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316383 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316393 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316403 4867 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316412 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316452 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316463 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316474 4867 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316485 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316496 4867 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316506 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316517 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316527 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316539 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316549 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316558 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316567 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316578 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316588 4867 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316598 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316608 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316617 4867 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316626 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316635 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" 
Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316644 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316655 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316665 4867 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316675 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316685 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316695 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316705 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316714 4867 reconciler_common.go:293] "Volume detached for volume \"signing-key\" 
(UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316724 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316734 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.316743 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.320439 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.331085 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.347033 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117e
e1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.402527 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.414642 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.422681 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 01 08:26:49 crc kubenswrapper[4867]: W0101 08:26:49.425012 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-9a190cf3802de313519ca008d8d984010d6838c7a94da704f5ed9ce64adce5dd WatchSource:0}: Error finding container 9a190cf3802de313519ca008d8d984010d6838c7a94da704f5ed9ce64adce5dd: Status 404 returned error can't find the container with id 9a190cf3802de313519ca008d8d984010d6838c7a94da704f5ed9ce64adce5dd Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.512789 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bqtdc"] Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.513158 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bqtdc" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.517013 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.517040 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.517013 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.529986 4867 csr.go:261] certificate signing request csr-fgvcm is approved, waiting to be issued Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.532864 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.534059 4867 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.545855 4867 csr.go:257] certificate signing request csr-fgvcm is issued Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.550290 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.566160 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.579636 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.597528 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.608389 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.619142 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b2fab997-d36e-43d8-9030-11a0a7a27e41-hosts-file\") pod \"node-resolver-bqtdc\" (UID: \"b2fab997-d36e-43d8-9030-11a0a7a27e41\") " pod="openshift-dns/node-resolver-bqtdc" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.619229 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hscwx\" (UniqueName: 
\"kubernetes.io/projected/b2fab997-d36e-43d8-9030-11a0a7a27e41-kube-api-access-hscwx\") pod \"node-resolver-bqtdc\" (UID: \"b2fab997-d36e-43d8-9030-11a0a7a27e41\") " pod="openshift-dns/node-resolver-bqtdc" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.622040 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.630408 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.720582 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:49 crc 
kubenswrapper[4867]: I0101 08:26:49.720799 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hscwx\" (UniqueName: \"kubernetes.io/projected/b2fab997-d36e-43d8-9030-11a0a7a27e41-kube-api-access-hscwx\") pod \"node-resolver-bqtdc\" (UID: \"b2fab997-d36e-43d8-9030-11a0a7a27e41\") " pod="openshift-dns/node-resolver-bqtdc" Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.720832 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:26:50.720795695 +0000 UTC m=+19.856064484 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.720960 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b2fab997-d36e-43d8-9030-11a0a7a27e41-hosts-file\") pod \"node-resolver-bqtdc\" (UID: \"b2fab997-d36e-43d8-9030-11a0a7a27e41\") " pod="openshift-dns/node-resolver-bqtdc" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.721069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b2fab997-d36e-43d8-9030-11a0a7a27e41-hosts-file\") pod \"node-resolver-bqtdc\" (UID: \"b2fab997-d36e-43d8-9030-11a0a7a27e41\") " pod="openshift-dns/node-resolver-bqtdc" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.745752 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hscwx\" (UniqueName: \"kubernetes.io/projected/b2fab997-d36e-43d8-9030-11a0a7a27e41-kube-api-access-hscwx\") pod \"node-resolver-bqtdc\" (UID: \"b2fab997-d36e-43d8-9030-11a0a7a27e41\") " pod="openshift-dns/node-resolver-bqtdc" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.821650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.821719 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.821744 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.821765 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.821910 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.821983 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:50.821964375 +0000 UTC m=+19.957233144 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.822459 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.822474 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.822486 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.822511 4867 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:50.82250362 +0000 UTC m=+19.957772389 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.822550 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.822579 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:50.822571092 +0000 UTC m=+19.957839851 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.822624 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.822634 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.822643 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:49 crc kubenswrapper[4867]: E0101 08:26:49.822661 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:50.822656004 +0000 UTC m=+19.957924773 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:49 crc kubenswrapper[4867]: I0101 08:26:49.824518 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bqtdc" Jan 01 08:26:49 crc kubenswrapper[4867]: W0101 08:26:49.835903 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2fab997_d36e_43d8_9030_11a0a7a27e41.slice/crio-4323bc017db81b6a3bdcbd72606a1170a644943f23ab6df24f9b6cb6c0290f2e WatchSource:0}: Error finding container 4323bc017db81b6a3bdcbd72606a1170a644943f23ab6df24f9b6cb6c0290f2e: Status 404 returned error can't find the container with id 4323bc017db81b6a3bdcbd72606a1170a644943f23ab6df24f9b6cb6c0290f2e Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.242105 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bqtdc" event={"ID":"b2fab997-d36e-43d8-9030-11a0a7a27e41","Type":"ContainerStarted","Data":"5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c"} Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.242159 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bqtdc" event={"ID":"b2fab997-d36e-43d8-9030-11a0a7a27e41","Type":"ContainerStarted","Data":"4323bc017db81b6a3bdcbd72606a1170a644943f23ab6df24f9b6cb6c0290f2e"} Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.243727 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b"} Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.243777 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b"} Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.243786 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4b1b773f3875a98799ac8f03ba329400274840a01a6be9ac2463d8689777e439"} Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.244605 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9a190cf3802de313519ca008d8d984010d6838c7a94da704f5ed9ce64adce5dd"} Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.245602 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4"} Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.245649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"797826cdf5eb0626a8edb1231638a4054536fcab5ec2529d4005d3766ca42c55"} Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.265210 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.276515 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.289100 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.304020 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.315396 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.331611 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.345546 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.359218 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.362957 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-wkbs8"] Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.363244 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.365655 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jh66z"] Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.366249 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-69jph"] Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.366403 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.366676 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.367762 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nftn"] Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.368564 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.368573 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.368675 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.369154 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.369401 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.369444 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.369445 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.370232 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.370249 4867 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.370393 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.370447 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.375444 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.375533 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.375595 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.375601 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.376134 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.376170 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 01 
08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.376265 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.376317 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.376333 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.376440 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.393587 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.420270 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.427268 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-system-cni-dir\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.427472 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-var-lib-openvswitch\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.427551 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-etc-kubernetes\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.427621 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wjm8\" (UniqueName: \"kubernetes.io/projected/da72a722-a2a3-459e-875a-e1605b442e05-kube-api-access-9wjm8\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.427723 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-multus-conf-dir\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.427793 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-systemd\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.427861 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-ovn\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.427978 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-multus-cni-dir\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428074 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/da72a722-a2a3-459e-875a-e1605b442e05-multus-daemon-config\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428148 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4608a141-23bd-4286-8607-ad4b16b5ee11-rootfs\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428213 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4608a141-23bd-4286-8607-ad4b16b5ee11-proxy-tls\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428287 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-config\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428353 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/35a93d40-ed12-413d-b8fa-1c683a35a7e2-cni-binary-copy\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428422 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4608a141-23bd-4286-8607-ad4b16b5ee11-mcd-auth-proxy-config\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428497 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z8tv\" (UniqueName: \"kubernetes.io/projected/4608a141-23bd-4286-8607-ad4b16b5ee11-kube-api-access-6z8tv\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428562 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-os-release\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428640 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-var-lib-cni-bin\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " 
pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428725 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-hostroot\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428807 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nw7l\" (UniqueName: \"kubernetes.io/projected/35a93d40-ed12-413d-b8fa-1c683a35a7e2-kube-api-access-9nw7l\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428872 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-netd\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.428955 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-env-overrides\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429019 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvswz\" (UniqueName: \"kubernetes.io/projected/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-kube-api-access-kvswz\") pod \"ovnkube-node-6nftn\" (UID: 
\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429094 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-cnibin\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429171 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-run-k8s-cni-cncf-io\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429238 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-openvswitch\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429305 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-netns\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429370 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-var-lib-kubelet\") pod \"multus-wkbs8\" (UID: 
\"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429447 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429519 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-systemd-units\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429591 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/da72a722-a2a3-459e-875a-e1605b442e05-cni-binary-copy\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429656 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-run-multus-certs\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429733 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-script-lib\") pod 
\"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429807 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-run-netns\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429894 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-node-log\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.429968 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/35a93d40-ed12-413d-b8fa-1c683a35a7e2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430035 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-kubelet\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430105 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-multus-socket-dir-parent\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430177 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-log-socket\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430246 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-bin\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430313 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-var-lib-cni-multus\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430395 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430469 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovn-node-metrics-cert\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430540 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-os-release\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430614 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-slash\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-etc-openvswitch\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430743 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-cnibin\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430819 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.430896 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-system-cni-dir\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.453767 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.489054 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.512472 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.531986 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-multus-socket-dir-parent\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532025 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/35a93d40-ed12-413d-b8fa-1c683a35a7e2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532043 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-kubelet\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532072 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-log-socket\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532087 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-bin\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532101 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-var-lib-cni-multus\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532117 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-slash\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532130 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-etc-openvswitch\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532145 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532163 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovn-node-metrics-cert\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532177 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-os-release\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532200 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-cnibin\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532245 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532264 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-system-cni-dir\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532284 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-system-cni-dir\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532298 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-var-lib-openvswitch\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532315 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-etc-kubernetes\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532329 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wjm8\" (UniqueName: 
\"kubernetes.io/projected/da72a722-a2a3-459e-875a-e1605b442e05-kube-api-access-9wjm8\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532318 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-multus-socket-dir-parent\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532388 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-systemd\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532343 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-systemd\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532433 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-ovn\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532454 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-multus-conf-dir\") pod \"multus-wkbs8\" (UID: 
\"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532478 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4608a141-23bd-4286-8607-ad4b16b5ee11-proxy-tls\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532500 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-config\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532522 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-multus-cni-dir\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532542 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/da72a722-a2a3-459e-875a-e1605b442e05-multus-daemon-config\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532564 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4608a141-23bd-4286-8607-ad4b16b5ee11-rootfs\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" 
Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4608a141-23bd-4286-8607-ad4b16b5ee11-mcd-auth-proxy-config\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532613 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/35a93d40-ed12-413d-b8fa-1c683a35a7e2-cni-binary-copy\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532636 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z8tv\" (UniqueName: \"kubernetes.io/projected/4608a141-23bd-4286-8607-ad4b16b5ee11-kube-api-access-6z8tv\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532660 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-netd\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532680 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-env-overrides\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532701 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-os-release\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532719 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-var-lib-cni-bin\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532737 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-hostroot\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532759 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nw7l\" (UniqueName: \"kubernetes.io/projected/35a93d40-ed12-413d-b8fa-1c683a35a7e2-kube-api-access-9nw7l\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532785 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-openvswitch\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 
08:26:50.532805 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvswz\" (UniqueName: \"kubernetes.io/projected/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-kube-api-access-kvswz\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532825 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-cnibin\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-run-k8s-cni-cncf-io\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532924 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-systemd-units\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532945 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-netns\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532967 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-var-lib-kubelet\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532977 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/35a93d40-ed12-413d-b8fa-1c683a35a7e2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.532996 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-run-multus-certs\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533021 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/da72a722-a2a3-459e-875a-e1605b442e05-cni-binary-copy\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533026 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-kubelet\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533057 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-log-socket\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533058 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-node-log\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533086 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-script-lib\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533089 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-node-log\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533102 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-run-netns\") pod \"multus-wkbs8\" (UID: 
\"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533124 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-ovn\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-bin\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-run-netns\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533171 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-multus-conf-dir\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-var-lib-cni-multus\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533210 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-slash\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533232 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-etc-openvswitch\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533258 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533659 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-system-cni-dir\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533667 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-cnibin\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533712 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-system-cni-dir\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533726 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-os-release\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-etc-kubernetes\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-hostroot\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533815 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-var-lib-openvswitch\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533905 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-os-release\") pod \"multus-wkbs8\" (UID: 
\"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533963 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.533982 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-var-lib-cni-bin\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534079 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4608a141-23bd-4286-8607-ad4b16b5ee11-rootfs\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-run-k8s-cni-cncf-io\") pod \"multus-wkbs8\" (UID: 
\"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534123 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-cnibin\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-var-lib-kubelet\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534166 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-systemd-units\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534173 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-host-run-multus-certs\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534191 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-netns\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534240 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da72a722-a2a3-459e-875a-e1605b442e05-multus-cni-dir\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534327 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-env-overrides\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534337 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-openvswitch\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534369 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-netd\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534463 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/35a93d40-ed12-413d-b8fa-1c683a35a7e2-cni-binary-copy\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534553 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/35a93d40-ed12-413d-b8fa-1c683a35a7e2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534657 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-config\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534720 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/da72a722-a2a3-459e-875a-e1605b442e05-cni-binary-copy\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.534799 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/da72a722-a2a3-459e-875a-e1605b442e05-multus-daemon-config\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.535093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4608a141-23bd-4286-8607-ad4b16b5ee11-mcd-auth-proxy-config\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.535164 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-script-lib\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.535171 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.540267 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovn-node-metrics-cert\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.540299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4608a141-23bd-4286-8607-ad4b16b5ee11-proxy-tls\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.547119 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-01 08:21:49 +0000 UTC, rotation deadline is 2026-10-03 01:38:15.618395089 +0000 UTC Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.547185 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6593h11m25.071216006s for next certificate rotation Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.558266 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9wjm8\" (UniqueName: \"kubernetes.io/projected/da72a722-a2a3-459e-875a-e1605b442e05-kube-api-access-9wjm8\") pod \"multus-wkbs8\" (UID: \"da72a722-a2a3-459e-875a-e1605b442e05\") " pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.558530 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.560265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvswz\" (UniqueName: \"kubernetes.io/projected/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-kube-api-access-kvswz\") pod \"ovnkube-node-6nftn\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.566542 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z8tv\" (UniqueName: \"kubernetes.io/projected/4608a141-23bd-4286-8607-ad4b16b5ee11-kube-api-access-6z8tv\") pod \"machine-config-daemon-69jph\" (UID: \"4608a141-23bd-4286-8607-ad4b16b5ee11\") " pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.576871 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nw7l\" (UniqueName: \"kubernetes.io/projected/35a93d40-ed12-413d-b8fa-1c683a35a7e2-kube-api-access-9nw7l\") pod 
\"multus-additional-cni-plugins-jh66z\" (UID: \"35a93d40-ed12-413d-b8fa-1c683a35a7e2\") " pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.577236 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.597187 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.607246 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.623036 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.639222 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.649897 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.661307 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.674710 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.675905 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wkbs8" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.683455 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.689327 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jh66z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.692957 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.694890 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.696852 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d26a65b_86d6_4603_bdeb_ffcb2f086fda.slice/crio-b757c86dbe8954ffcc745fd87d69a6c3786db50f80bc1098bfe5f093f59e51c3 WatchSource:0}: Error finding container b757c86dbe8954ffcc745fd87d69a6c3786db50f80bc1098bfe5f093f59e51c3: Status 404 returned error can't find the container with id b757c86dbe8954ffcc745fd87d69a6c3786db50f80bc1098bfe5f093f59e51c3 Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.712513 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.725917 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.734649 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.734863 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:26:52.734831154 +0000 UTC m=+21.870099933 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.741386 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:50Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.836625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.836785 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.837240 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.837860 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.837931 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.837932 4867 
secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.837946 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.837987 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.838045 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.838083 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:52.838069972 +0000 UTC m=+21.973338741 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.838143 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.838163 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.838180 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.838220 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:52.838205206 +0000 UTC m=+21.973473975 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.838248 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:52.838241817 +0000 UTC m=+21.973510586 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:26:50 crc kubenswrapper[4867]: E0101 08:26:50.838676 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:52.838655989 +0000 UTC m=+21.973924758 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:50 crc kubenswrapper[4867]: I0101 08:26:50.993528 4867 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.993830 4867 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.993866 4867 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.993898 4867 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994101 4867 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994141 4867 reflector.go:484] 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994160 4867 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994169 4867 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994188 4867 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.993837 4867 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.993851 4867 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: 
W0101 08:26:50.994213 4867 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994199 4867 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994248 4867 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994271 4867 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994296 4867 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994337 4867 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc 
kubenswrapper[4867]: W0101 08:26:50.994407 4867 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994735 4867 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:50 crc kubenswrapper[4867]: W0101 08:26:50.994903 4867 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.128000 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.128035 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:26:51 crc kubenswrapper[4867]: E0101 08:26:51.128123 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:26:51 crc kubenswrapper[4867]: E0101 08:26:51.128286 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.128463 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:51 crc kubenswrapper[4867]: E0101 08:26:51.128521 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.133411 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.133945 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.134775 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.135437 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.136903 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.137408 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.137967 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.138853 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.139483 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.140397 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.140922 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.142186 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.142801 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.143901 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.144700 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.147229 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.147985 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.148410 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.149585 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.150261 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.150716 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.151787 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.152347 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.153609 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.154115 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.154629 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.155324 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.156177 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.156653 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.157824 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.158443 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 01 
08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.159377 4867 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.159474 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.161408 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.162647 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.163201 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.165007 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.166181 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.166756 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.167564 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.169012 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.169644 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.170821 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.172101 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.172984 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.173962 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.174523 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.175504 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.176375 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.177725 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.178335 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.178877 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.179834 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.180533 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.181594 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.231712 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.251079 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wkbs8" event={"ID":"da72a722-a2a3-459e-875a-e1605b442e05","Type":"ContainerStarted","Data":"d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.251134 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wkbs8" event={"ID":"da72a722-a2a3-459e-875a-e1605b442e05","Type":"ContainerStarted","Data":"144299e4754b823f8e91a0e89b714f78254f79cbfeaefaf96a614fe92b327a65"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.253050 4867 generic.go:334] "Generic (PLEG): container finished" podID="35a93d40-ed12-413d-b8fa-1c683a35a7e2" containerID="9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37" exitCode=0 Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.253105 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" event={"ID":"35a93d40-ed12-413d-b8fa-1c683a35a7e2","Type":"ContainerDied","Data":"9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.253126 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" event={"ID":"35a93d40-ed12-413d-b8fa-1c683a35a7e2","Type":"ContainerStarted","Data":"e104ea9f62cced722aa55539dbf8c79b1e717a4bcc148fcb3a6bad3fdd51ec93"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.254764 4867 
generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12" exitCode=0 Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.254787 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.254854 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"b757c86dbe8954ffcc745fd87d69a6c3786db50f80bc1098bfe5f093f59e51c3"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.255039 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.257641 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.257675 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.257688 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"c83bcd87dcb4c94738afc21a566c59353d37d17bc5e5ff9e9cfc57f5d704b824"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.298483 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.338177 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.369954 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.388277 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.401485 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.424029 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.434786 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.445896 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.453971 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.465855 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.477930 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.488046 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.501500 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.516756 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.527715 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.538831 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.554665 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.572737 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.585497 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.596743 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.609687 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.799068 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.845532 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.861486 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.884956 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tg4nj"] Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.885301 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tg4nj" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.888555 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.888782 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.888985 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.890064 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.892647 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 01 08:26:51 crc kubenswrapper[4867]: 
I0101 08:26:51.896553 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.902406 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.904706 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.904741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.904905 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.904918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.905026 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.911222 4867 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.911481 4867 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.912495 4867 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.912522 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.912532 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.912546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.912556 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:51Z","lastTransitionTime":"2026-01-01T08:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.918085 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: E0101 08:26:51.929934 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.930690 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.936433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.936483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.936495 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.936511 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.936526 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:51Z","lastTransitionTime":"2026-01-01T08:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.945818 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: E0101 08:26:51.950574 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.955384 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.955417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.955426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.955443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.955453 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:51Z","lastTransitionTime":"2026-01-01T08:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.959819 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.967375 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.969016 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.972522 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/778253a2-b732-4460-994a-9543f533383f-serviceca\") pod \"node-ca-tg4nj\" (UID: \"778253a2-b732-4460-994a-9543f533383f\") " pod="openshift-image-registry/node-ca-tg4nj" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.972563 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpvzj\" (UniqueName: \"kubernetes.io/projected/778253a2-b732-4460-994a-9543f533383f-kube-api-access-gpvzj\") pod \"node-ca-tg4nj\" (UID: \"778253a2-b732-4460-994a-9543f533383f\") " pod="openshift-image-registry/node-ca-tg4nj" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.972604 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/778253a2-b732-4460-994a-9543f533383f-host\") pod \"node-ca-tg4nj\" (UID: \"778253a2-b732-4460-994a-9543f533383f\") " pod="openshift-image-registry/node-ca-tg4nj" Jan 01 08:26:51 crc kubenswrapper[4867]: E0101 08:26:51.973252 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.973731 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.977655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.977688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.977698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.977712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.977722 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:51Z","lastTransitionTime":"2026-01-01T08:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.986722 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: E0101 08:26:51.987403 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.989347 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.990203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.990244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.990256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.990273 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.990285 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:51Z","lastTransitionTime":"2026-01-01T08:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:51 crc kubenswrapper[4867]: I0101 08:26:51.997119 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.006858 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.007035 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.008489 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.008522 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.008532 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.008549 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.008558 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:52Z","lastTransitionTime":"2026-01-01T08:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.018231 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.032022 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.047387 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.061772 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.073564 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.076072 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpvzj\" (UniqueName: 
\"kubernetes.io/projected/778253a2-b732-4460-994a-9543f533383f-kube-api-access-gpvzj\") pod \"node-ca-tg4nj\" (UID: \"778253a2-b732-4460-994a-9543f533383f\") " pod="openshift-image-registry/node-ca-tg4nj" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.076176 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/778253a2-b732-4460-994a-9543f533383f-host\") pod \"node-ca-tg4nj\" (UID: \"778253a2-b732-4460-994a-9543f533383f\") " pod="openshift-image-registry/node-ca-tg4nj" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.076216 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/778253a2-b732-4460-994a-9543f533383f-serviceca\") pod \"node-ca-tg4nj\" (UID: \"778253a2-b732-4460-994a-9543f533383f\") " pod="openshift-image-registry/node-ca-tg4nj" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.076960 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/778253a2-b732-4460-994a-9543f533383f-host\") pod \"node-ca-tg4nj\" (UID: \"778253a2-b732-4460-994a-9543f533383f\") " pod="openshift-image-registry/node-ca-tg4nj" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.077408 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/778253a2-b732-4460-994a-9543f533383f-serviceca\") pod \"node-ca-tg4nj\" (UID: \"778253a2-b732-4460-994a-9543f533383f\") " pod="openshift-image-registry/node-ca-tg4nj" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.094939 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpvzj\" (UniqueName: \"kubernetes.io/projected/778253a2-b732-4460-994a-9543f533383f-kube-api-access-gpvzj\") pod \"node-ca-tg4nj\" (UID: \"778253a2-b732-4460-994a-9543f533383f\") " 
pod="openshift-image-registry/node-ca-tg4nj" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.100641 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.111705 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.111804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.111863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.111951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.112006 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:52Z","lastTransitionTime":"2026-01-01T08:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.114572 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.141608 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.200319 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.201014 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.209805 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.210942 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tg4nj" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.212002 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.213667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.213700 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.213713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.213729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.213740 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:52Z","lastTransitionTime":"2026-01-01T08:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.253766 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.262465 4867 generic.go:334] "Generic (PLEG): container finished" podID="35a93d40-ed12-413d-b8fa-1c683a35a7e2" containerID="516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770" exitCode=0 Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.262515 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" event={"ID":"35a93d40-ed12-413d-b8fa-1c683a35a7e2","Type":"ContainerDied","Data":"516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.266646 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.266684 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.266696 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.266705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" 
event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.268338 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.270703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tg4nj" event={"ID":"778253a2-b732-4460-994a-9543f533383f","Type":"ContainerStarted","Data":"43c1dbda4e5457a8c0102d9eacf35f51bdaabe76bcff55931e5d2aa37adaab5d"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.282170 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.293238 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.308523 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.315962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.315985 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.315997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.316014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.316043 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:52Z","lastTransitionTime":"2026-01-01T08:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.326605 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.345194 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.358273 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.371840 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.383839 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.400213 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.414380 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.418086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.418126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.418138 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.418155 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.418171 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:52Z","lastTransitionTime":"2026-01-01T08:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.427993 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.460063 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.502722 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.520209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.520248 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.520256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.520271 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.520280 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:52Z","lastTransitionTime":"2026-01-01T08:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.543154 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.554615 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.574398 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.614994 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.622706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.622765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.622786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.622813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.622832 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:52Z","lastTransitionTime":"2026-01-01T08:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.640453 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512
335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.684255 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e
55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.725413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.725487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.725515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:52 crc 
kubenswrapper[4867]: I0101 08:26:52.725546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.725564 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:52Z","lastTransitionTime":"2026-01-01T08:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.733707 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.767324 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.782332 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.782656 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:26:56.782603432 +0000 UTC m=+25.917872251 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.805292 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.828501 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.828564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.828582 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.828608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.828626 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:52Z","lastTransitionTime":"2026-01-01T08:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.860807 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z 
is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.883996 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.884048 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.884075 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.884099 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884186 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884189 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884216 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884218 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884252 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884272 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884284 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884231 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884244 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:56.884230045 +0000 UTC m=+26.019498834 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884341 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:56.884324948 +0000 UTC m=+26.019593717 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884351 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:56.884346439 +0000 UTC m=+26.019615208 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:52 crc kubenswrapper[4867]: E0101 08:26:52.884361 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-01 08:26:56.884356299 +0000 UTC m=+26.019625068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.886411 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.928061 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.931202 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.931299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.931328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.931357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.931381 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:52Z","lastTransitionTime":"2026-01-01T08:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:52 crc kubenswrapper[4867]: I0101 08:26:52.969085 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:52Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.003646 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.033804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.033839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.033852 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.033870 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.033896 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:53Z","lastTransitionTime":"2026-01-01T08:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.046767 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.083558 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.128190 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.128230 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.128612 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:26:53 crc kubenswrapper[4867]: E0101 08:26:53.128800 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:26:53 crc kubenswrapper[4867]: E0101 08:26:53.129172 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:26:53 crc kubenswrapper[4867]: E0101 08:26:53.129378 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.136277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.136309 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.136317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.136332 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.136344 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:53Z","lastTransitionTime":"2026-01-01T08:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.239007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.239063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.239077 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.239097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.239113 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:53Z","lastTransitionTime":"2026-01-01T08:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.278132 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.278197 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.279693 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tg4nj" event={"ID":"778253a2-b732-4460-994a-9543f533383f","Type":"ContainerStarted","Data":"9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.281900 4867 generic.go:334] "Generic (PLEG): container finished" podID="35a93d40-ed12-413d-b8fa-1c683a35a7e2" containerID="d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff" exitCode=0 Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.281925 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" event={"ID":"35a93d40-ed12-413d-b8fa-1c683a35a7e2","Type":"ContainerDied","Data":"d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.297479 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.312989 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.332524 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.342193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.342245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.342265 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.342291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.342309 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:53Z","lastTransitionTime":"2026-01-01T08:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.357269 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.376182 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.393251 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.407927 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.424871 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.446595 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.446634 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.446643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.446657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.447104 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:53Z","lastTransitionTime":"2026-01-01T08:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.447873 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.488001 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.524456 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.549324 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.549363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.549378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.549396 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.549408 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:53Z","lastTransitionTime":"2026-01-01T08:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.564296 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.603770 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.641748 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.651307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.651351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.651363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:53 crc 
kubenswrapper[4867]: I0101 08:26:53.651383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.651397 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:53Z","lastTransitionTime":"2026-01-01T08:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.684131 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 
08:26:53.728079 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 
08:26:53.754675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.754743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.754760 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.754788 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.754805 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:53Z","lastTransitionTime":"2026-01-01T08:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.761548 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.808972 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110
160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.847225 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.857939 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.857977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.857988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.858002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.858013 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:53Z","lastTransitionTime":"2026-01-01T08:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.880091 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.925213 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e
55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.960981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.961024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.961033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:53 crc 
kubenswrapper[4867]: I0101 08:26:53.961048 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.961059 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:53Z","lastTransitionTime":"2026-01-01T08:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:53 crc kubenswrapper[4867]: I0101 08:26:53.968454 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.006914 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.047759 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.063464 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.063511 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.063526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.063543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.063554 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:54Z","lastTransitionTime":"2026-01-01T08:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.083924 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.127263 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.166013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.166080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.166097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.166199 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.166230 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:54Z","lastTransitionTime":"2026-01-01T08:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.268332 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.268368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.268379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.268394 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.268406 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:54Z","lastTransitionTime":"2026-01-01T08:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.288061 4867 generic.go:334] "Generic (PLEG): container finished" podID="35a93d40-ed12-413d-b8fa-1c683a35a7e2" containerID="5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e" exitCode=0 Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.288180 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" event={"ID":"35a93d40-ed12-413d-b8fa-1c683a35a7e2","Type":"ContainerDied","Data":"5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e"} Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.308262 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.321619 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.335633 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.353399 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.371109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.371157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.371172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.371195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.371211 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:54Z","lastTransitionTime":"2026-01-01T08:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.371245 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.387409 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.404740 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.443149 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.474506 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.474538 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.474550 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.474592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.474604 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:54Z","lastTransitionTime":"2026-01-01T08:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.482760 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.531398 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.566261 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.576964 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.577006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.577017 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.577036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.577047 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:54Z","lastTransitionTime":"2026-01-01T08:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.606448 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.647801 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.679713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.679756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.679765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.679779 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.679788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:54Z","lastTransitionTime":"2026-01-01T08:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.781606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.781647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.781657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.781672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.781682 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:54Z","lastTransitionTime":"2026-01-01T08:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.884112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.884157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.884170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.884187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.884200 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:54Z","lastTransitionTime":"2026-01-01T08:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.986259 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.986302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.986313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.986330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:54 crc kubenswrapper[4867]: I0101 08:26:54.986342 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:54Z","lastTransitionTime":"2026-01-01T08:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.088936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.088996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.089008 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.089029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.089041 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:55Z","lastTransitionTime":"2026-01-01T08:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.128046 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.128092 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.128130 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:26:55 crc kubenswrapper[4867]: E0101 08:26:55.128244 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:26:55 crc kubenswrapper[4867]: E0101 08:26:55.128364 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:26:55 crc kubenswrapper[4867]: E0101 08:26:55.128544 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.191047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.191100 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.191111 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.191128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.191140 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:55Z","lastTransitionTime":"2026-01-01T08:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.293697 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.293736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.293744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.293758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.293769 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:55Z","lastTransitionTime":"2026-01-01T08:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.296442 4867 generic.go:334] "Generic (PLEG): container finished" podID="35a93d40-ed12-413d-b8fa-1c683a35a7e2" containerID="5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca" exitCode=0 Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.296522 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" event={"ID":"35a93d40-ed12-413d-b8fa-1c683a35a7e2","Type":"ContainerDied","Data":"5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.306645 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.311903 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.324684 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.346817 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.362855 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.377627 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.389401 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.395407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.395433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.395445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.395458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.395467 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:55Z","lastTransitionTime":"2026-01-01T08:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.406623 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z 
is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.420204 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.433273 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.448333 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.459780 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.472458 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.487422 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.499459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 
08:26:55.499501 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.499529 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.499543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.499552 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:55Z","lastTransitionTime":"2026-01-01T08:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.601957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.601992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.602003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.602020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.602029 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:55Z","lastTransitionTime":"2026-01-01T08:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.704257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.704326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.704349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.704378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.704400 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:55Z","lastTransitionTime":"2026-01-01T08:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.754315 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.758812 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.766704 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.776715 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771a
ee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.799988 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.808189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:55 crc 
kubenswrapper[4867]: I0101 08:26:55.808259 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.808287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.808319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.808339 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:55Z","lastTransitionTime":"2026-01-01T08:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.819174 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.837120 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.859445 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.873656 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.893286 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.905061 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.911172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.911218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.911231 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.911248 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.911261 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:55Z","lastTransitionTime":"2026-01-01T08:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.920531 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.948476 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.963092 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.981417 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:55 crc kubenswrapper[4867]: I0101 08:26:55.993383 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-01T08:26:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.004162 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.013547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.013577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.013586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.013600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.013623 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:56Z","lastTransitionTime":"2026-01-01T08:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.015125 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.032973 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.050231 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.063087 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.074303 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.085139 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.096099 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.109035 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.115525 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.115563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.115575 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.115591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.115603 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:56Z","lastTransitionTime":"2026-01-01T08:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.123344 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.147239 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.181306 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.217756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.217800 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.217814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.217829 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.217841 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:56Z","lastTransitionTime":"2026-01-01T08:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.225834 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.264631 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.313772 4867 generic.go:334] "Generic (PLEG): container finished" podID="35a93d40-ed12-413d-b8fa-1c683a35a7e2" containerID="85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905" exitCode=0 Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.314219 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-jh66z" event={"ID":"35a93d40-ed12-413d-b8fa-1c683a35a7e2","Type":"ContainerDied","Data":"85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905"} Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.320731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.320813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.320830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.320880 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.320942 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:56Z","lastTransitionTime":"2026-01-01T08:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.322916 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.331213 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.368729 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.406722 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.426116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.426152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.426165 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.426181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.426193 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:56Z","lastTransitionTime":"2026-01-01T08:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.445800 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z 
is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.482177 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.527783 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704
628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.529127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.529168 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.529179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.529196 4867 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.529208 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:56Z","lastTransitionTime":"2026-01-01T08:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.566407 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.608139 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.632176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.632330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.632453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.632761 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.633085 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:56Z","lastTransitionTime":"2026-01-01T08:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.647724 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.692081 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.729610 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.735300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.735340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:56 crc 
kubenswrapper[4867]: I0101 08:26:56.735363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.735378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.735388 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:56Z","lastTransitionTime":"2026-01-01T08:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.768184 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.806903 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.828398 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.828640 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:27:04.828603617 +0000 UTC m=+33.963872426 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.838302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.838352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.838372 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.838395 4867 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.838414 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:56Z","lastTransitionTime":"2026-01-01T08:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.857314 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitiali
zing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:56Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.929720 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.929782 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.929842 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.929880 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930021 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930020 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930074 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930077 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930113 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930128 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930089 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:27:04.930067946 +0000 UTC m=+34.065336755 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930096 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930203 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-01 08:27:04.930184109 +0000 UTC m=+34.065452898 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930037 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930245 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:27:04.930237531 +0000 UTC m=+34.065506320 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:26:56 crc kubenswrapper[4867]: E0101 08:26:56.930295 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-01 08:27:04.930254101 +0000 UTC m=+34.065522900 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.940540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.940587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.940605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.940628 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:56 crc kubenswrapper[4867]: I0101 08:26:56.940645 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:56Z","lastTransitionTime":"2026-01-01T08:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.044178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.044245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.044266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.044294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.044315 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:57Z","lastTransitionTime":"2026-01-01T08:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.128455 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.128583 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.128492 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:26:57 crc kubenswrapper[4867]: E0101 08:26:57.128696 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:26:57 crc kubenswrapper[4867]: E0101 08:26:57.128872 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:26:57 crc kubenswrapper[4867]: E0101 08:26:57.129063 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.147021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.147092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.147115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.147147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.147169 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:57Z","lastTransitionTime":"2026-01-01T08:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.267617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.267683 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.267702 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.267726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.267744 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:57Z","lastTransitionTime":"2026-01-01T08:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.323320 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.323625 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.323685 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.329280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" event={"ID":"35a93d40-ed12-413d-b8fa-1c683a35a7e2","Type":"ContainerStarted","Data":"5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.346950 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.365352 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.369283 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.370846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.370920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.370938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.370961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.370979 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:57Z","lastTransitionTime":"2026-01-01T08:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.373797 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.394446 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.405032 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.420789 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.430152 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.440485 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.452167 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.473982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.474037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.474054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.474080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.474097 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:57Z","lastTransitionTime":"2026-01-01T08:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.479046 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.495245 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.507757 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.520293 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.535436 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.552213 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.568138 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.576378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.576406 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.576419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.576436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.576448 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:57Z","lastTransitionTime":"2026-01-01T08:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.578557 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.594875 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110
160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.607136 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.626200 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.650787 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.678649 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.678726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.678741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.678794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.678810 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:57Z","lastTransitionTime":"2026-01-01T08:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.702757 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.730783 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.766804 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.781433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.781469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.781486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.781512 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.781532 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:57Z","lastTransitionTime":"2026-01-01T08:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.809385 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.850541 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.884338 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:57 crc 
kubenswrapper[4867]: I0101 08:26:57.884442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.884466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.884496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.884519 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:57Z","lastTransitionTime":"2026-01-01T08:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.893278 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.929049 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.969289 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:57Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.987524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.987584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.987601 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.987623 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:57 crc kubenswrapper[4867]: I0101 08:26:57.987638 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:57Z","lastTransitionTime":"2026-01-01T08:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.090844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.090931 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.090948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.090970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.090986 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:58Z","lastTransitionTime":"2026-01-01T08:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.194422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.194524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.194541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.194565 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.194588 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:58Z","lastTransitionTime":"2026-01-01T08:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.298099 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.298135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.298146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.298166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.298179 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:58Z","lastTransitionTime":"2026-01-01T08:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.332586 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.401424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.401469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.401482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.401499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.401518 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:58Z","lastTransitionTime":"2026-01-01T08:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.504712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.504764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.504778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.504801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.504816 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:58Z","lastTransitionTime":"2026-01-01T08:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.607695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.607757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.607775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.607799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.607817 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:58Z","lastTransitionTime":"2026-01-01T08:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.710391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.710455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.710473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.710502 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.710521 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:58Z","lastTransitionTime":"2026-01-01T08:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.813121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.813169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.813186 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.813210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.813226 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:58Z","lastTransitionTime":"2026-01-01T08:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.916566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.916620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.916638 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.916659 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:58 crc kubenswrapper[4867]: I0101 08:26:58.916674 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:58Z","lastTransitionTime":"2026-01-01T08:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.019767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.019817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.019828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.019867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.019879 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:59Z","lastTransitionTime":"2026-01-01T08:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.123211 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.123258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.123293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.123372 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.123393 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:59Z","lastTransitionTime":"2026-01-01T08:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.127712 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:26:59 crc kubenswrapper[4867]: E0101 08:26:59.128457 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.128662 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.128804 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:26:59 crc kubenswrapper[4867]: E0101 08:26:59.133515 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:26:59 crc kubenswrapper[4867]: E0101 08:26:59.133709 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.226814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.226879 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.226924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.226943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.226957 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:59Z","lastTransitionTime":"2026-01-01T08:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.329661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.329704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.329715 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.329732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.329745 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:59Z","lastTransitionTime":"2026-01-01T08:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.338023 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/0.log" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.341639 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0" exitCode=1 Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.341679 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0"} Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.342865 4867 scope.go:117] "RemoveContainer" containerID="6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.363062 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.388677 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.410494 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.433794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.433847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.433861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.433910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.433827 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.433927 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:59Z","lastTransitionTime":"2026-01-01T08:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.456479 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.472317 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.483596 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e
55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.506720 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:26:59Z\\\",\\\"message\\\":\\\".273918 6132 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0101 08:26:59.273952 6132 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0101 08:26:59.273971 6132 handler.go:208] Removed *v1.Pod event 
handler 3\\\\nI0101 08:26:59.275263 6132 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 08:26:59.275300 6132 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:26:59.275306 6132 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:26:59.275319 6132 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:26:59.275324 6132 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:26:59.275338 6132 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:26:59.275353 6132 factory.go:656] Stopping watch factory\\\\nI0101 08:26:59.275367 6132 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:26:59.275361 6132 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:26:59.275390 6132 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:26:59.275394 6132 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:26:59.275401 6132 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0101 08:26:59.275395 6132 handler.go:208] Removed *v1.NetworkPolicy 
even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf1
2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.524282 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.538247 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.538342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.538383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.538395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.538413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.538426 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:59Z","lastTransitionTime":"2026-01-01T08:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.554753 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.570793 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.583512 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67
682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.597416 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:26:59Z is after 2025-08-24T17:21:41Z" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.641612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.641654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.641667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.641684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.641697 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:59Z","lastTransitionTime":"2026-01-01T08:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.744342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.744440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.744470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.744502 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.744526 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:59Z","lastTransitionTime":"2026-01-01T08:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.846379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.846417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.846431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.846446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.846456 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:59Z","lastTransitionTime":"2026-01-01T08:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.949413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.949472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.949490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.949512 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:26:59 crc kubenswrapper[4867]: I0101 08:26:59.949531 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:26:59Z","lastTransitionTime":"2026-01-01T08:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.052279 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.052339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.052356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.052382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.052399 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:00Z","lastTransitionTime":"2026-01-01T08:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.156118 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.156189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.156213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.156241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.156264 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:00Z","lastTransitionTime":"2026-01-01T08:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.259436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.259502 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.259519 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.259545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.259563 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:00Z","lastTransitionTime":"2026-01-01T08:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.349880 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/0.log" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.354512 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9"} Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.354603 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.363473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.363505 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.363520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.363545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.363558 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:00Z","lastTransitionTime":"2026-01-01T08:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.467049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.467087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.467098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.467114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.467125 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:00Z","lastTransitionTime":"2026-01-01T08:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.498393 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.513928 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.535423 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.551557 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.571247 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.571301 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.571318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.571342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.571359 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:00Z","lastTransitionTime":"2026-01-01T08:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.572149 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.599189 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.629314 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:26:59Z\\\",\\\"message\\\":\\\".273918 6132 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0101 08:26:59.273952 6132 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0101 08:26:59.273971 6132 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0101 08:26:59.275263 6132 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 
08:26:59.275300 6132 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:26:59.275306 6132 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:26:59.275319 6132 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:26:59.275324 6132 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:26:59.275338 6132 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:26:59.275353 6132 factory.go:656] Stopping watch factory\\\\nI0101 08:26:59.275367 6132 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:26:59.275361 6132 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:26:59.275390 6132 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:26:59.275394 6132 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:26:59.275401 6132 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0101 08:26:59.275395 6132 handler.go:208] Removed *v1.NetworkPolicy 
even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.651154 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.673925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.673983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:00 crc 
kubenswrapper[4867]: I0101 08:27:00.674002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.674032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.674055 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:00Z","lastTransitionTime":"2026-01-01T08:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.675303 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.691241 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.715954 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.732330 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.751786 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.765984 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:00Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.776872 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.776930 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.776942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.776959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.776972 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:00Z","lastTransitionTime":"2026-01-01T08:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.880013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.880075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.880091 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.880113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.880129 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:00Z","lastTransitionTime":"2026-01-01T08:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.983676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.983743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.983762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.983786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:00 crc kubenswrapper[4867]: I0101 08:27:00.983807 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:00Z","lastTransitionTime":"2026-01-01T08:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.087036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.087111 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.087136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.087170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.087193 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:01Z","lastTransitionTime":"2026-01-01T08:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.128306 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.128441 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.128521 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:01 crc kubenswrapper[4867]: E0101 08:27:01.128447 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:01 crc kubenswrapper[4867]: E0101 08:27:01.128639 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:01 crc kubenswrapper[4867]: E0101 08:27:01.128795 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.150516 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.166877 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.184637 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.189994 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.190064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.190079 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.190102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.190118 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:01Z","lastTransitionTime":"2026-01-01T08:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.218175 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:26:59Z\\\",\\\"message\\\":\\\".273918 6132 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0101 08:26:59.273952 6132 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0101 08:26:59.273971 6132 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0101 08:26:59.275263 6132 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 
08:26:59.275300 6132 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:26:59.275306 6132 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:26:59.275319 6132 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:26:59.275324 6132 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:26:59.275338 6132 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:26:59.275353 6132 factory.go:656] Stopping watch factory\\\\nI0101 08:26:59.275367 6132 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:26:59.275361 6132 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:26:59.275390 6132 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:26:59.275394 6132 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:26:59.275401 6132 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0101 08:26:59.275395 6132 handler.go:208] Removed *v1.NetworkPolicy 
even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.237600 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.257415 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.275599 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.292555 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.292646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.292674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.292710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.292735 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:01Z","lastTransitionTime":"2026-01-01T08:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.301966 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z 
is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.314798 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.329287 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704
628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.346991 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.362247 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/1.log" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.363106 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/0.log" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.365927 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.368439 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9" exitCode=1 Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.368498 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9"} Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.368570 4867 scope.go:117] "RemoveContainer" containerID="6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.369713 4867 scope.go:117] "RemoveContainer" containerID="3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9" Jan 01 08:27:01 crc kubenswrapper[4867]: E0101 08:27:01.370056 4867 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.381102 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.396940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.397010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:01 crc 
kubenswrapper[4867]: I0101 08:27:01.397039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.397078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.397104 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:01Z","lastTransitionTime":"2026-01-01T08:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.407067 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.428266 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.449702 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.470096 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f
9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.491012 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.499866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.499958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.499977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.500006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.500026 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:01Z","lastTransitionTime":"2026-01-01T08:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.510219 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.533032 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.549165 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.566532 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.585626 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.602828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.602915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.602933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.602956 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.602974 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:01Z","lastTransitionTime":"2026-01-01T08:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.604317 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.621866 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e
55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.646823 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce633876af890868689d71ce63fdb4a078afa4706ec76373c50191ffadf0ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:26:59Z\\\",\\\"message\\\":\\\".273918 6132 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0101 08:26:59.273952 6132 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0101 08:26:59.273971 6132 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0101 08:26:59.275263 6132 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 
08:26:59.275300 6132 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:26:59.275306 6132 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:26:59.275319 6132 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:26:59.275324 6132 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:26:59.275338 6132 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:26:59.275353 6132 factory.go:656] Stopping watch factory\\\\nI0101 08:26:59.275367 6132 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:26:59.275361 6132 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:26:59.275390 6132 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:26:59.275394 6132 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:26:59.275401 6132 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0101 08:26:59.275395 6132 handler.go:208] Removed *v1.NetworkPolicy even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:00Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:00.287106 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:00.287118 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:00.287151 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:27:00.287163 6252 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:27:00.287176 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 
08:27:00.287183 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:00.287199 6252 handler.go:208] Removed *v1.Node event handler 7\\\\nI0101 08:27:00.287211 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:00.287213 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:00.287267 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 08:27:00.287292 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:00.287309 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:00.287336 6252 factory.go:656] Stopping watch factory\\\\nI0101 08:27:00.287334 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0101 08:27:00.287348 6252 ovnkube.go:599] Stopped ovnkube\\\\nI0101 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},
{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.668512 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.684341 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.705069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.705113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.705125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.705141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.705152 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:01Z","lastTransitionTime":"2026-01-01T08:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.808065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.808128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.808152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.808177 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.808194 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:01Z","lastTransitionTime":"2026-01-01T08:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.911538 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.911603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.911628 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.911658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:01 crc kubenswrapper[4867]: I0101 08:27:01.911679 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:01Z","lastTransitionTime":"2026-01-01T08:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.016542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.016641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.016661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.016687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.016705 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.119438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.119493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.119513 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.119542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.119592 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.222260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.222383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.222412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.222443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.222468 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.324856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.324910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.324920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.324932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.324941 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.375788 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/1.log" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.380695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.380750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.380768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.380791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.380807 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.382785 4867 scope.go:117] "RemoveContainer" containerID="3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9" Jan 01 08:27:02 crc kubenswrapper[4867]: E0101 08:27:02.383106 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" Jan 01 08:27:02 crc kubenswrapper[4867]: E0101 08:27:02.412531 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.419675 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.422150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.422214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.422266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.422296 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.422318 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.447175 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x"] Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.447710 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.449875 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.449911 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 01 08:27:02 crc kubenswrapper[4867]: E0101 08:27:02.451411 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.457826 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.457862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.457873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.457933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.457947 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.464982 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d
8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: E0101 08:27:02.476940 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.479441 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.479981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.480029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.480041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.480058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.480072 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.493470 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: E0101 08:27:02.493542 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.497243 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.497276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.497284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.497299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.497308 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.506036 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: E0101 08:27:02.508425 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: E0101 08:27:02.508823 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.511085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.511129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.511143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.511163 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.511183 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.522794 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: \"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.522837 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: \"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.522943 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: \"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.522969 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99n2b\" (UniqueName: \"kubernetes.io/projected/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-kube-api-access-99n2b\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: \"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.528418 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.546507 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.558544 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.572205 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.600405 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:00Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:00.287106 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:00.287118 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:00.287151 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:27:00.287163 6252 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:27:00.287176 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:00.287183 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:00.287199 6252 handler.go:208] Removed *v1.Node event handler 7\\\\nI0101 08:27:00.287211 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:00.287213 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:00.287267 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 08:27:00.287292 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:00.287309 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:00.287336 6252 factory.go:656] Stopping watch factory\\\\nI0101 08:27:00.287334 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0101 08:27:00.287348 6252 ovnkube.go:599] Stopped ovnkube\\\\nI0101 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.613729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.613791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.613815 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.613840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.613858 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.619669 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.623495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: \"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 
08:27:02.623551 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99n2b\" (UniqueName: \"kubernetes.io/projected/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-kube-api-access-99n2b\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: \"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.623591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: \"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.623621 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: \"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.624471 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: \"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.624774 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: 
\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.631003 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: \"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.639597 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.646999 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99n2b\" (UniqueName: \"kubernetes.io/projected/3db3b0fa-02f9-475b-a6ca-8ac262cbe337-kube-api-access-99n2b\") pod \"ovnkube-control-plane-749d76644c-zs59x\" (UID: \"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.658293 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.676752 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.692690 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.706838 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.716648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.716697 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.716710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.716728 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.716741 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.723445 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.741086 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e
55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.762791 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.773864 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:00Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:00.287106 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:00.287118 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:00.287151 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:27:00.287163 6252 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:27:00.287176 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:00.287183 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:00.287199 6252 handler.go:208] Removed *v1.Node event handler 7\\\\nI0101 08:27:00.287211 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:00.287213 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:00.287267 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 08:27:00.287292 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:00.287309 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:00.287336 6252 factory.go:656] Stopping watch factory\\\\nI0101 08:27:00.287334 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0101 08:27:00.287348 6252 ovnkube.go:599] Stopped ovnkube\\\\nI0101 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: W0101 08:27:02.777855 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3db3b0fa_02f9_475b_a6ca_8ac262cbe337.slice/crio-f9c8e77a82549d13824b7640221884120efb0485179917727a8cc9ec7b58631b WatchSource:0}: Error finding container f9c8e77a82549d13824b7640221884120efb0485179917727a8cc9ec7b58631b: Status 404 returned error can't find the container with id f9c8e77a82549d13824b7640221884120efb0485179917727a8cc9ec7b58631b Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.795693 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.819398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.819778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.819790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.819809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.819823 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.822403 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.842530 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"ipta
bles-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.862737 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.876346 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.898241 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.914251 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.923158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.923216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.923234 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.923259 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.923276 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:02Z","lastTransitionTime":"2026-01-01T08:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.933087 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.950029 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:02 crc kubenswrapper[4867]: I0101 08:27:02.970102 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:02Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.025965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.026001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.026011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.026031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.026043 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:03Z","lastTransitionTime":"2026-01-01T08:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.127812 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.127982 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.128160 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:03 crc kubenswrapper[4867]: E0101 08:27:03.128163 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:03 crc kubenswrapper[4867]: E0101 08:27:03.128331 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:03 crc kubenswrapper[4867]: E0101 08:27:03.128459 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.128617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.128666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.128686 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.128709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.128726 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:03Z","lastTransitionTime":"2026-01-01T08:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.231615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.231672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.231688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.231712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.231729 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:03Z","lastTransitionTime":"2026-01-01T08:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.334833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.334943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.334964 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.334991 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.335013 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:03Z","lastTransitionTime":"2026-01-01T08:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.387098 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" event={"ID":"3db3b0fa-02f9-475b-a6ca-8ac262cbe337","Type":"ContainerStarted","Data":"f9c8e77a82549d13824b7640221884120efb0485179917727a8cc9ec7b58631b"} Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.437415 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.437480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.437503 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.437537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.437559 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:03Z","lastTransitionTime":"2026-01-01T08:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.540372 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.540448 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.540472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.540502 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.540544 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:03Z","lastTransitionTime":"2026-01-01T08:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.644399 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.644465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.644488 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.644517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.644538 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:03Z","lastTransitionTime":"2026-01-01T08:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.746787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.746947 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.746966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.746989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.747005 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:03Z","lastTransitionTime":"2026-01-01T08:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.850386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.850466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.850492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.850524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.850547 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:03Z","lastTransitionTime":"2026-01-01T08:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.953790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.953856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.953874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.953933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:03 crc kubenswrapper[4867]: I0101 08:27:03.953951 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:03Z","lastTransitionTime":"2026-01-01T08:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.056550 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.056614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.056637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.056662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.056679 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:04Z","lastTransitionTime":"2026-01-01T08:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.159719 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.159797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.159815 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.159840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.159858 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:04Z","lastTransitionTime":"2026-01-01T08:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.263476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.263545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.263563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.263590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.263608 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:04Z","lastTransitionTime":"2026-01-01T08:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.362374 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kv8wr"] Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.363221 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.363346 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.367097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.367159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.367181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.367207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.367228 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:04Z","lastTransitionTime":"2026-01-01T08:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.391702 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.394110 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" event={"ID":"3db3b0fa-02f9-475b-a6ca-8ac262cbe337","Type":"ContainerStarted","Data":"6f320736d0565d6bdbe13a7f7f6bc59048d54de5b289822d570df98a517bb4ee"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.394171 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" event={"ID":"3db3b0fa-02f9-475b-a6ca-8ac262cbe337","Type":"ContainerStarted","Data":"40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.413667 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.433315 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.444148 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgb2t\" (UniqueName: \"kubernetes.io/projected/28af0def-191f-4949-b617-a7a07dd8145b-kube-api-access-hgb2t\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.444478 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.458854 4867 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579d
c827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.470012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.470073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.470090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.470115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.470133 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:04Z","lastTransitionTime":"2026-01-01T08:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.477394 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.498541 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110
160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.517556 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.533331 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.545319 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.545373 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgb2t\" (UniqueName: \"kubernetes.io/projected/28af0def-191f-4949-b617-a7a07dd8145b-kube-api-access-hgb2t\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.545591 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.545724 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs podName:28af0def-191f-4949-b617-a7a07dd8145b nodeName:}" failed. No retries permitted until 2026-01-01 08:27:05.045696297 +0000 UTC m=+34.180965096 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs") pod "network-metrics-daemon-kv8wr" (UID: "28af0def-191f-4949-b617-a7a07dd8145b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.558126 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc 
kubenswrapper[4867]: I0101 08:27:04.573022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.573074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.573092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.573116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.573137 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:04Z","lastTransitionTime":"2026-01-01T08:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.581832 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgb2t\" (UniqueName: \"kubernetes.io/projected/28af0def-191f-4949-b617-a7a07dd8145b-kube-api-access-hgb2t\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.598591 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:00Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:00.287106 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:00.287118 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:00.287151 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:27:00.287163 6252 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:27:00.287176 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:00.287183 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:00.287199 6252 handler.go:208] Removed *v1.Node event handler 7\\\\nI0101 08:27:00.287211 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:00.287213 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:00.287267 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 08:27:00.287292 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:00.287309 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:00.287336 6252 factory.go:656] Stopping watch factory\\\\nI0101 08:27:00.287334 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0101 08:27:00.287348 6252 ovnkube.go:599] Stopped ovnkube\\\\nI0101 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.614357 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.637992 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.661448 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.675836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.675950 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.675982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.676011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.676029 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:04Z","lastTransitionTime":"2026-01-01T08:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.682568 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\
\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.696695 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc 
kubenswrapper[4867]: I0101 08:27:04.714849 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.745669 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:00Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:00.287106 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:00.287118 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:00.287151 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:27:00.287163 6252 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:27:00.287176 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:00.287183 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:00.287199 6252 handler.go:208] Removed *v1.Node event handler 7\\\\nI0101 08:27:00.287211 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:00.287213 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:00.287267 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 08:27:00.287292 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:00.287309 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:00.287336 6252 factory.go:656] Stopping watch factory\\\\nI0101 08:27:00.287334 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0101 08:27:00.287348 6252 ovnkube.go:599] Stopped ovnkube\\\\nI0101 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.764626 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54
de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.778673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.778734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.778765 4867 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.778792 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.778810 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:04Z","lastTransitionTime":"2026-01-01T08:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.784788 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.800370 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.817823 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.840685 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.848217 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.848380 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:27:20.848349404 +0000 UTC m=+49.983618213 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.857535 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc 
kubenswrapper[4867]: I0101 08:27:04.878163 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.881336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.881446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.881475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.881508 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.881530 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:04Z","lastTransitionTime":"2026-01-01T08:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.903148 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.922098 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.946272 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.949549 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.949612 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.949662 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.949704 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.949805 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.949863 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.949931 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.949938 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.949963 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.949985 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.949995 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-01 08:27:20.949970017 +0000 UTC m=+50.085238806 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.950034 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:27:20.950009218 +0000 UTC m=+50.085278017 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.949870 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.950075 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-01 08:27:20.9500578 +0000 UTC m=+50.085326689 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.950088 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:27:04 crc kubenswrapper[4867]: E0101 08:27:04.950209 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-01 08:27:20.950174423 +0000 UTC m=+50.085443232 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.962304 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.983472 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.984844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.984907 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.984921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.984937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:04 crc kubenswrapper[4867]: I0101 08:27:04.984951 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:04Z","lastTransitionTime":"2026-01-01T08:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.004059 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:05Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.022849 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:05Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.042822 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:05Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.050428 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:05 crc kubenswrapper[4867]: E0101 08:27:05.050610 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:05 crc kubenswrapper[4867]: E0101 08:27:05.050718 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs podName:28af0def-191f-4949-b617-a7a07dd8145b nodeName:}" failed. 
No retries permitted until 2026-01-01 08:27:06.050693465 +0000 UTC m=+35.185962264 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs") pod "network-metrics-daemon-kv8wr" (UID: "28af0def-191f-4949-b617-a7a07dd8145b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.087728 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.087793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.087810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.087838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.087857 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:05Z","lastTransitionTime":"2026-01-01T08:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.128562 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.128627 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.128710 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:05 crc kubenswrapper[4867]: E0101 08:27:05.128762 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:05 crc kubenswrapper[4867]: E0101 08:27:05.128982 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:05 crc kubenswrapper[4867]: E0101 08:27:05.129179 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.191057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.191116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.191181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.191199 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.191212 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:05Z","lastTransitionTime":"2026-01-01T08:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.294128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.294217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.294247 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.294277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.294300 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:05Z","lastTransitionTime":"2026-01-01T08:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.396962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.397011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.397068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.397092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.397143 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:05Z","lastTransitionTime":"2026-01-01T08:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.499793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.499851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.499870 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.499918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.499937 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:05Z","lastTransitionTime":"2026-01-01T08:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.602833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.602929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.602953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.602980 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.603000 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:05Z","lastTransitionTime":"2026-01-01T08:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.707163 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.707217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.707236 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.707259 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.707279 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:05Z","lastTransitionTime":"2026-01-01T08:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.810630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.810707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.810730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.810759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.810781 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:05Z","lastTransitionTime":"2026-01-01T08:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.913982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.914044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.914068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.914100 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:05 crc kubenswrapper[4867]: I0101 08:27:05.914121 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:05Z","lastTransitionTime":"2026-01-01T08:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.017312 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.017403 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.017424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.017449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.017468 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:06Z","lastTransitionTime":"2026-01-01T08:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.062466 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:06 crc kubenswrapper[4867]: E0101 08:27:06.062708 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:06 crc kubenswrapper[4867]: E0101 08:27:06.062970 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs podName:28af0def-191f-4949-b617-a7a07dd8145b nodeName:}" failed. No retries permitted until 2026-01-01 08:27:08.062850083 +0000 UTC m=+37.198118932 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs") pod "network-metrics-daemon-kv8wr" (UID: "28af0def-191f-4949-b617-a7a07dd8145b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.121008 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.121068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.121085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.121109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.121127 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:06Z","lastTransitionTime":"2026-01-01T08:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.128419 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:06 crc kubenswrapper[4867]: E0101 08:27:06.128683 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.224061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.224154 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.224179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.224209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.224232 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:06Z","lastTransitionTime":"2026-01-01T08:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.327503 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.327564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.327583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.327609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.327627 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:06Z","lastTransitionTime":"2026-01-01T08:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.430384 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.430456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.430478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.430509 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.430530 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:06Z","lastTransitionTime":"2026-01-01T08:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.533259 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.533397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.533483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.533572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.533620 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:06Z","lastTransitionTime":"2026-01-01T08:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.637069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.637124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.637143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.637165 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.637182 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:06Z","lastTransitionTime":"2026-01-01T08:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.740995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.741067 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.741090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.741115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.741133 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:06Z","lastTransitionTime":"2026-01-01T08:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.843946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.844002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.844019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.844044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.844060 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:06Z","lastTransitionTime":"2026-01-01T08:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.947398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.947476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.947500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.947530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:06 crc kubenswrapper[4867]: I0101 08:27:06.947550 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:06Z","lastTransitionTime":"2026-01-01T08:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.051412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.051476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.051499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.051529 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.051551 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:07Z","lastTransitionTime":"2026-01-01T08:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.127541 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.127633 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:07 crc kubenswrapper[4867]: E0101 08:27:07.127721 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:07 crc kubenswrapper[4867]: E0101 08:27:07.127836 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.128151 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:07 crc kubenswrapper[4867]: E0101 08:27:07.128301 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.154403 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.154577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.154632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.154664 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.154688 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:07Z","lastTransitionTime":"2026-01-01T08:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.257862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.257983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.258005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.258035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.258074 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:07Z","lastTransitionTime":"2026-01-01T08:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.361825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.361940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.361975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.362006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.362028 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:07Z","lastTransitionTime":"2026-01-01T08:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.465622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.465720 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.465739 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.465762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.465779 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:07Z","lastTransitionTime":"2026-01-01T08:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.569783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.569842 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.569862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.569922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.569939 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:07Z","lastTransitionTime":"2026-01-01T08:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.674131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.674208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.674231 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.674261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.674284 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:07Z","lastTransitionTime":"2026-01-01T08:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.777564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.777620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.777636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.777658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.777675 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:07Z","lastTransitionTime":"2026-01-01T08:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.881240 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.881317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.881339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.881369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.881394 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:07Z","lastTransitionTime":"2026-01-01T08:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.985261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.985337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.985363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.985394 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:07 crc kubenswrapper[4867]: I0101 08:27:07.985419 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:07Z","lastTransitionTime":"2026-01-01T08:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.085442 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:08 crc kubenswrapper[4867]: E0101 08:27:08.085675 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:08 crc kubenswrapper[4867]: E0101 08:27:08.085764 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs podName:28af0def-191f-4949-b617-a7a07dd8145b nodeName:}" failed. No retries permitted until 2026-01-01 08:27:12.085738707 +0000 UTC m=+41.221007516 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs") pod "network-metrics-daemon-kv8wr" (UID: "28af0def-191f-4949-b617-a7a07dd8145b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.088099 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.088150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.088171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.088199 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.088221 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:08Z","lastTransitionTime":"2026-01-01T08:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.127982 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:08 crc kubenswrapper[4867]: E0101 08:27:08.128169 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.191305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.191390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.191441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.191463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.191482 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:08Z","lastTransitionTime":"2026-01-01T08:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.294498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.294554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.294571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.294594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.294612 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:08Z","lastTransitionTime":"2026-01-01T08:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.397760 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.397816 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.397833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.397857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.397873 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:08Z","lastTransitionTime":"2026-01-01T08:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.501207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.501277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.501300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.501330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.501348 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:08Z","lastTransitionTime":"2026-01-01T08:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.604352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.604408 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.604425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.604447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.604463 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:08Z","lastTransitionTime":"2026-01-01T08:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.707376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.707439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.707455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.707478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.707497 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:08Z","lastTransitionTime":"2026-01-01T08:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.809859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.809905 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.809914 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.809928 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.809936 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:08Z","lastTransitionTime":"2026-01-01T08:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.912374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.912468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.912490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.912520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:08 crc kubenswrapper[4867]: I0101 08:27:08.912543 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:08Z","lastTransitionTime":"2026-01-01T08:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.016202 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.016284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.016306 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.016342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.016365 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:09Z","lastTransitionTime":"2026-01-01T08:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.119280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.119352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.119374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.119402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.119423 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:09Z","lastTransitionTime":"2026-01-01T08:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.128022 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.128056 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.128103 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:09 crc kubenswrapper[4867]: E0101 08:27:09.128217 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:09 crc kubenswrapper[4867]: E0101 08:27:09.128337 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:09 crc kubenswrapper[4867]: E0101 08:27:09.128458 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.221926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.221995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.222028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.222056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.222076 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:09Z","lastTransitionTime":"2026-01-01T08:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.325158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.325245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.325278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.325310 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.325333 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:09Z","lastTransitionTime":"2026-01-01T08:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.428704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.428767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.428784 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.428810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.428828 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:09Z","lastTransitionTime":"2026-01-01T08:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.463066 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.464433 4867 scope.go:117] "RemoveContainer" containerID="3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9" Jan 01 08:27:09 crc kubenswrapper[4867]: E0101 08:27:09.464706 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.531460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.531551 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.531569 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.531593 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.531609 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:09Z","lastTransitionTime":"2026-01-01T08:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.634874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.635042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.635062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.635088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.635106 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:09Z","lastTransitionTime":"2026-01-01T08:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.737378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.737456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.737481 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.737513 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.737538 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:09Z","lastTransitionTime":"2026-01-01T08:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.840399 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.840470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.840490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.840516 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.840535 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:09Z","lastTransitionTime":"2026-01-01T08:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.943991 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.944252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.944279 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.944309 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:09 crc kubenswrapper[4867]: I0101 08:27:09.944331 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:09Z","lastTransitionTime":"2026-01-01T08:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.047336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.047390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.047407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.047429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.047445 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:10Z","lastTransitionTime":"2026-01-01T08:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.128329 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:10 crc kubenswrapper[4867]: E0101 08:27:10.128611 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.151966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.152748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.153348 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.153462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.153531 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:10Z","lastTransitionTime":"2026-01-01T08:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.256583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.256653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.256669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.256695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.256713 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:10Z","lastTransitionTime":"2026-01-01T08:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.359797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.359882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.359934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.359958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.359975 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:10Z","lastTransitionTime":"2026-01-01T08:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.462608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.462684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.462707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.462742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.462777 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:10Z","lastTransitionTime":"2026-01-01T08:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.566348 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.566433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.566451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.566475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.566494 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:10Z","lastTransitionTime":"2026-01-01T08:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.669745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.669805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.669824 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.669853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.669876 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:10Z","lastTransitionTime":"2026-01-01T08:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.772992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.773052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.773070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.773092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.773113 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:10Z","lastTransitionTime":"2026-01-01T08:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.876454 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.876524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.876542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.876566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.876587 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:10Z","lastTransitionTime":"2026-01-01T08:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.980023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.980091 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.980108 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.980131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:10 crc kubenswrapper[4867]: I0101 08:27:10.980155 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:10Z","lastTransitionTime":"2026-01-01T08:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.083163 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.083237 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.083260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.083291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.083313 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:11Z","lastTransitionTime":"2026-01-01T08:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.128578 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.128640 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:11 crc kubenswrapper[4867]: E0101 08:27:11.128762 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.128948 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:11 crc kubenswrapper[4867]: E0101 08:27:11.129126 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:11 crc kubenswrapper[4867]: E0101 08:27:11.129233 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.156318 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.176981 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.185795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.185926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.185947 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.185972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.185993 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:11Z","lastTransitionTime":"2026-01-01T08:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.198343 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.229979 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.260940 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.279052 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.288749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.288818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.288841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.288868 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.288924 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:11Z","lastTransitionTime":"2026-01-01T08:27:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.298609 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.319258 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.337497 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.369265 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:00Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:00.287106 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:00.287118 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:00.287151 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:27:00.287163 6252 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:27:00.287176 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:00.287183 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:00.287199 6252 handler.go:208] Removed *v1.Node event handler 7\\\\nI0101 08:27:00.287211 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:00.287213 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:00.287267 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 08:27:00.287292 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:00.287309 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:00.287336 6252 factory.go:656] Stopping watch factory\\\\nI0101 08:27:00.287334 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0101 08:27:00.287348 6252 ovnkube.go:599] Stopped ovnkube\\\\nI0101 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.385515 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54
de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.390937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.390983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.391000 4867 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.391018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.391032 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:11Z","lastTransitionTime":"2026-01-01T08:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.400537 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.420944 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.437699 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.451733 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413
bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.467700 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:11 crc 
kubenswrapper[4867]: I0101 08:27:11.494389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.494455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.494472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.494496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.494513 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:11Z","lastTransitionTime":"2026-01-01T08:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.599190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.599280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.599304 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.599350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.599374 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:11Z","lastTransitionTime":"2026-01-01T08:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.702628 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.702705 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.702723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.702751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.702775 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:11Z","lastTransitionTime":"2026-01-01T08:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.805639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.805716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.805740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.805771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.805790 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:11Z","lastTransitionTime":"2026-01-01T08:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.909048 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.909109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.909131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.909161 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:11 crc kubenswrapper[4867]: I0101 08:27:11.909185 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:11Z","lastTransitionTime":"2026-01-01T08:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.012487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.012587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.012612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.012641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.012662 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.115721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.115773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.115794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.115823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.115846 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.128338 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:12 crc kubenswrapper[4867]: E0101 08:27:12.128509 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.132958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:12 crc kubenswrapper[4867]: E0101 08:27:12.133119 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:12 crc kubenswrapper[4867]: E0101 08:27:12.133196 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs podName:28af0def-191f-4949-b617-a7a07dd8145b nodeName:}" failed. No retries permitted until 2026-01-01 08:27:20.133171873 +0000 UTC m=+49.268440682 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs") pod "network-metrics-daemon-kv8wr" (UID: "28af0def-191f-4949-b617-a7a07dd8145b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.218729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.218777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.218794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.218816 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.218836 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.321825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.321905 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.321922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.321947 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.321964 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.424722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.424776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.424795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.424818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.424836 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.528330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.528399 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.528425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.528456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.528479 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.631573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.631651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.631677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.631706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.631728 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.735247 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.735313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.735333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.735356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.735373 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.759869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.759917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.759929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.759949 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.759961 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: E0101 08:27:12.778559 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:12Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.782482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.782515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.782526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.782540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.782550 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: E0101 08:27:12.799795 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:12Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.805000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.805102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.805125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.805187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.805214 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: E0101 08:27:12.823078 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:12Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.827500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.827556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.827581 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.827614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.827636 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: E0101 08:27:12.840841 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:12Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.845540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.845604 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.845626 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.845657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.845680 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: E0101 08:27:12.862690 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:12Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:12 crc kubenswrapper[4867]: E0101 08:27:12.863406 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.865268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.865344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.865360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.865380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.865396 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.967996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.968130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.968154 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.968182 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:12 crc kubenswrapper[4867]: I0101 08:27:12.968244 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:12Z","lastTransitionTime":"2026-01-01T08:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.071608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.071676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.071699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.071729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.071752 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:13Z","lastTransitionTime":"2026-01-01T08:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.127608 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.127663 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:13 crc kubenswrapper[4867]: E0101 08:27:13.127784 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.127823 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:13 crc kubenswrapper[4867]: E0101 08:27:13.128041 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:13 crc kubenswrapper[4867]: E0101 08:27:13.128209 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.174856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.174957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.174976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.175005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.175025 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:13Z","lastTransitionTime":"2026-01-01T08:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.277938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.278007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.278027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.278051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.278073 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:13Z","lastTransitionTime":"2026-01-01T08:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.380879 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.380984 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.381003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.381026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.381043 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:13Z","lastTransitionTime":"2026-01-01T08:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.483350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.483418 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.483438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.483462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.483479 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:13Z","lastTransitionTime":"2026-01-01T08:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.585771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.585824 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.585841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.585864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.585881 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:13Z","lastTransitionTime":"2026-01-01T08:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.689129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.689238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.689265 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.689296 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.689318 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:13Z","lastTransitionTime":"2026-01-01T08:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.792291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.792371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.792399 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.792428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.792449 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:13Z","lastTransitionTime":"2026-01-01T08:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.895080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.895121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.895154 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.895174 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.895185 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:13Z","lastTransitionTime":"2026-01-01T08:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.997924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.998004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.998026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.998054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:13 crc kubenswrapper[4867]: I0101 08:27:13.998075 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:13Z","lastTransitionTime":"2026-01-01T08:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.100875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.100979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.100997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.101020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.101038 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:14Z","lastTransitionTime":"2026-01-01T08:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.128488 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:14 crc kubenswrapper[4867]: E0101 08:27:14.128669 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.203973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.204056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.204075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.204100 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.204118 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:14Z","lastTransitionTime":"2026-01-01T08:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.307550 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.307607 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.307627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.307650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.307668 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:14Z","lastTransitionTime":"2026-01-01T08:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.410440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.410490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.410517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.410545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.410555 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:14Z","lastTransitionTime":"2026-01-01T08:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.513242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.513314 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.513339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.513369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.513391 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:14Z","lastTransitionTime":"2026-01-01T08:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.616767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.616921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.616943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.616968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.616986 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:14Z","lastTransitionTime":"2026-01-01T08:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.719951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.720039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.720060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.720082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.720103 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:14Z","lastTransitionTime":"2026-01-01T08:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.839019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.839078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.839095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.839117 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.839133 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:14Z","lastTransitionTime":"2026-01-01T08:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.942608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.943534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.943691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.943835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:14 crc kubenswrapper[4867]: I0101 08:27:14.944036 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:14Z","lastTransitionTime":"2026-01-01T08:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.046921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.047216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.047348 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.047473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.047603 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:15Z","lastTransitionTime":"2026-01-01T08:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.128121 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.128154 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:15 crc kubenswrapper[4867]: E0101 08:27:15.128402 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.128480 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:15 crc kubenswrapper[4867]: E0101 08:27:15.128701 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:15 crc kubenswrapper[4867]: E0101 08:27:15.128988 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.151244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.152064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.152144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.152174 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.152213 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:15Z","lastTransitionTime":"2026-01-01T08:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.255518 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.255583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.255602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.255625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.255646 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:15Z","lastTransitionTime":"2026-01-01T08:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.358798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.358870 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.358926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.358951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.358971 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:15Z","lastTransitionTime":"2026-01-01T08:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.462608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.462670 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.462687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.462711 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.462728 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:15Z","lastTransitionTime":"2026-01-01T08:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.565653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.565793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.565817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.565846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.565865 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:15Z","lastTransitionTime":"2026-01-01T08:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.671602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.671657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.671679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.671704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.671723 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:15Z","lastTransitionTime":"2026-01-01T08:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.774564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.774619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.774633 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.774652 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.774664 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:15Z","lastTransitionTime":"2026-01-01T08:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.877290 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.877358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.877381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.877410 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.877432 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:15Z","lastTransitionTime":"2026-01-01T08:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.979711 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.979765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.979782 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.979801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:15 crc kubenswrapper[4867]: I0101 08:27:15.979816 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:15Z","lastTransitionTime":"2026-01-01T08:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.082532 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.082597 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.082616 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.082643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.082661 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:16Z","lastTransitionTime":"2026-01-01T08:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.127858 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:16 crc kubenswrapper[4867]: E0101 08:27:16.128135 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.185928 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.185998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.186022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.186048 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.186064 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:16Z","lastTransitionTime":"2026-01-01T08:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.289634 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.289713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.289803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.289836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.289865 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:16Z","lastTransitionTime":"2026-01-01T08:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.392583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.392952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.393119 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.393386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.393531 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:16Z","lastTransitionTime":"2026-01-01T08:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.497149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.497209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.497228 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.497256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.497274 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:16Z","lastTransitionTime":"2026-01-01T08:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.623356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.623413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.623438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.623465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.623482 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:16Z","lastTransitionTime":"2026-01-01T08:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.727642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.727689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.727701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.727719 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.727733 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:16Z","lastTransitionTime":"2026-01-01T08:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.831275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.831326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.831338 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.831355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.831366 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:16Z","lastTransitionTime":"2026-01-01T08:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.934317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.934408 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.934426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.934449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:16 crc kubenswrapper[4867]: I0101 08:27:16.934508 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:16Z","lastTransitionTime":"2026-01-01T08:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.037546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.037612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.037636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.037663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.037683 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:17Z","lastTransitionTime":"2026-01-01T08:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.128243 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.128318 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:17 crc kubenswrapper[4867]: E0101 08:27:17.128467 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.128362 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:17 crc kubenswrapper[4867]: E0101 08:27:17.128655 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:17 crc kubenswrapper[4867]: E0101 08:27:17.128858 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.140244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.140317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.140341 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.140372 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.140397 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:17Z","lastTransitionTime":"2026-01-01T08:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.243217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.243496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.243544 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.243574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.243592 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:17Z","lastTransitionTime":"2026-01-01T08:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.346450 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.346516 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.346534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.346558 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.346577 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:17Z","lastTransitionTime":"2026-01-01T08:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.448744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.448815 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.448840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.448868 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.448921 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:17Z","lastTransitionTime":"2026-01-01T08:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.552793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.552866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.552928 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.552963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.552986 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:17Z","lastTransitionTime":"2026-01-01T08:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.609193 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.620389 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.629292 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\
"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.651596 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.656092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.656161 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.656188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.656215 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.656232 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:17Z","lastTransitionTime":"2026-01-01T08:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.670981 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc 
kubenswrapper[4867]: I0101 08:27:17.690988 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.711373 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.731161 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.753558 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.759518 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.759580 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.759605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.759632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.759649 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:17Z","lastTransitionTime":"2026-01-01T08:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.770502 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.793075 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110
160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.812287 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.831137 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.848765 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e
55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.863364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.863444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.863467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:17 crc 
kubenswrapper[4867]: I0101 08:27:17.863496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.863519 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:17Z","lastTransitionTime":"2026-01-01T08:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.879614 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:00Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:00.287106 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:00.287118 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:00.287151 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:27:00.287163 6252 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:27:00.287176 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:00.287183 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:00.287199 6252 handler.go:208] Removed *v1.Node event handler 7\\\\nI0101 08:27:00.287211 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:00.287213 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:00.287267 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 08:27:00.287292 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:00.287309 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:00.287336 6252 factory.go:656] Stopping watch factory\\\\nI0101 08:27:00.287334 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0101 08:27:00.287348 6252 ovnkube.go:599] Stopped ovnkube\\\\nI0101 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.896778 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54
de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.916174 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.938354 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:17Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.966651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.966699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.966715 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.966737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:17 crc kubenswrapper[4867]: I0101 08:27:17.966768 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:17Z","lastTransitionTime":"2026-01-01T08:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.070292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.070344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.070363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.070386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.070404 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:18Z","lastTransitionTime":"2026-01-01T08:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.127559 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:18 crc kubenswrapper[4867]: E0101 08:27:18.127779 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.173332 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.173377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.173395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.173419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.173435 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:18Z","lastTransitionTime":"2026-01-01T08:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.276756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.276810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.276826 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.276850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.276867 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:18Z","lastTransitionTime":"2026-01-01T08:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.380357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.380440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.380469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.380499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.380521 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:18Z","lastTransitionTime":"2026-01-01T08:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.483079 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.483138 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.483156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.483178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.483196 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:18Z","lastTransitionTime":"2026-01-01T08:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.585984 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.586037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.586059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.586082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.586100 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:18Z","lastTransitionTime":"2026-01-01T08:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.688822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.688882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.688937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.688961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.688980 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:18Z","lastTransitionTime":"2026-01-01T08:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.792047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.792108 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.792126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.792148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.792166 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:18Z","lastTransitionTime":"2026-01-01T08:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.895437 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.895832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.896085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.896274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.896457 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:18Z","lastTransitionTime":"2026-01-01T08:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.999485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.999547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.999566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.999592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:18 crc kubenswrapper[4867]: I0101 08:27:18.999609 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:18Z","lastTransitionTime":"2026-01-01T08:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.104014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.104066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.104093 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.104122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.104139 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:19Z","lastTransitionTime":"2026-01-01T08:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.127999 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.128070 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:19 crc kubenswrapper[4867]: E0101 08:27:19.128161 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.128348 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:19 crc kubenswrapper[4867]: E0101 08:27:19.128422 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:19 crc kubenswrapper[4867]: E0101 08:27:19.128864 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.207955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.208021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.208039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.208060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.208081 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:19Z","lastTransitionTime":"2026-01-01T08:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.310390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.310451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.310468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.310491 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.310514 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:19Z","lastTransitionTime":"2026-01-01T08:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.413393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.413443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.413460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.413484 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.413502 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:19Z","lastTransitionTime":"2026-01-01T08:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.515877 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.516061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.516083 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.516106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.516122 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:19Z","lastTransitionTime":"2026-01-01T08:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.619662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.620123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.620277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.620424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.620563 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:19Z","lastTransitionTime":"2026-01-01T08:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.723294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.723391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.723447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.723473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.723491 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:19Z","lastTransitionTime":"2026-01-01T08:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.827127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.827230 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.827250 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.827274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.827291 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:19Z","lastTransitionTime":"2026-01-01T08:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.929702 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.929790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.929812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.929839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:19 crc kubenswrapper[4867]: I0101 08:27:19.929863 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:19Z","lastTransitionTime":"2026-01-01T08:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.033955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.034034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.034051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.034078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.034095 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:20Z","lastTransitionTime":"2026-01-01T08:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.128245 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr"
Jan 01 08:27:20 crc kubenswrapper[4867]: E0101 08:27:20.128446 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.139206 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.139272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.139290 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.139317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.139335 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:20Z","lastTransitionTime":"2026-01-01T08:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.223560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr"
Jan 01 08:27:20 crc kubenswrapper[4867]: E0101 08:27:20.223760 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 01 08:27:20 crc kubenswrapper[4867]: E0101 08:27:20.223843 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs podName:28af0def-191f-4949-b617-a7a07dd8145b nodeName:}" failed. No retries permitted until 2026-01-01 08:27:36.223820744 +0000 UTC m=+65.359089553 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs") pod "network-metrics-daemon-kv8wr" (UID: "28af0def-191f-4949-b617-a7a07dd8145b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.247578 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.247639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.247657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.247684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.247702 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:20Z","lastTransitionTime":"2026-01-01T08:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.351694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.352253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.352418 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.352553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.352678 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:20Z","lastTransitionTime":"2026-01-01T08:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.456240 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.456303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.456326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.456370 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.456393 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:20Z","lastTransitionTime":"2026-01-01T08:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.559509 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.559558 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.559574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.559594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.559611 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:20Z","lastTransitionTime":"2026-01-01T08:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.664951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.665012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.665035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.665060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.665076 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:20Z","lastTransitionTime":"2026-01-01T08:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.767987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.768058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.768083 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.768114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.768136 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:20Z","lastTransitionTime":"2026-01-01T08:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.871709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.871782 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.871799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.872229 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.872285 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:20Z","lastTransitionTime":"2026-01-01T08:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.932043 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 01 08:27:20 crc kubenswrapper[4867]: E0101 08:27:20.932251 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:27:52.932216303 +0000 UTC m=+82.067485102 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.976129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.976176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.976194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.976217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:20 crc kubenswrapper[4867]: I0101 08:27:20.976235 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:20Z","lastTransitionTime":"2026-01-01T08:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.033273 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.033345 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.033387 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.033424 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.033594 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.033662 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:27:53.033640361 +0000 UTC m=+82.168909160 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.033949 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.033976 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.033994 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.034038 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-01 08:27:53.034023052 +0000 UTC m=+82.169291861 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.034338 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.034425 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:27:53.034401812 +0000 UTC m=+82.169670611 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.034433 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.034497 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.034519 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.034603 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-01 08:27:53.034575287 +0000 UTC m=+82.169844086 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.080024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.080980 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.081015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.081040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.081057 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:21Z","lastTransitionTime":"2026-01-01T08:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.127977 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.128055 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.128099 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.128273 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.128379 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 01 08:27:21 crc kubenswrapper[4867]: E0101 08:27:21.128520 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.165628 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:00Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:00.287106 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:00.287118 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:00.287151 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:27:00.287163 6252 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:27:00.287176 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:00.287183 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:00.287199 6252 handler.go:208] Removed *v1.Node event handler 7\\\\nI0101 08:27:00.287211 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:00.287213 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:00.287267 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 08:27:00.287292 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:00.287309 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:00.287336 6252 factory.go:656] Stopping watch factory\\\\nI0101 08:27:00.287334 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0101 08:27:00.287348 6252 ovnkube.go:599] Stopped ovnkube\\\\nI0101 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.184470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.184542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.184567 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.184602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.184623 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:21Z","lastTransitionTime":"2026-01-01T08:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.185927 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.207058 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.227959 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.248066 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.264544 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.275763 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc 
kubenswrapper[4867]: I0101 08:27:21.286462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.286524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.286539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.286555 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.286570 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:21Z","lastTransitionTime":"2026-01-01T08:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.290014 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16c
c1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.306925 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.322502 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.338655 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.361658 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.377161 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.390169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.390223 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.390238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.390260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.390275 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:21Z","lastTransitionTime":"2026-01-01T08:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.397245 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.423484 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.437609 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.451323 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:21Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.493741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.493773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.493783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.493800 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.493812 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:21Z","lastTransitionTime":"2026-01-01T08:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.597684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.597785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.597799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.597824 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.597846 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:21Z","lastTransitionTime":"2026-01-01T08:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.700991 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.701052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.701069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.701090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.701108 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:21Z","lastTransitionTime":"2026-01-01T08:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.803796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.803868 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.803927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.803954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.803970 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:21Z","lastTransitionTime":"2026-01-01T08:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.906568 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.906620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.906631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.906654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:21 crc kubenswrapper[4867]: I0101 08:27:21.906666 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:21Z","lastTransitionTime":"2026-01-01T08:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.012528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.012600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.012619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.012644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.012661 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:22Z","lastTransitionTime":"2026-01-01T08:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.115263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.115326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.115344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.115367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.115385 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:22Z","lastTransitionTime":"2026-01-01T08:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.128439 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:22 crc kubenswrapper[4867]: E0101 08:27:22.128666 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.218416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.218491 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.218515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.218546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.218569 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:22Z","lastTransitionTime":"2026-01-01T08:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.321454 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.321514 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.321536 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.321561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.321579 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:22Z","lastTransitionTime":"2026-01-01T08:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.424799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.424865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.424921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.424946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.424963 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:22Z","lastTransitionTime":"2026-01-01T08:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.528863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.528956 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.528978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.529002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.529019 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:22Z","lastTransitionTime":"2026-01-01T08:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.631747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.631793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.631809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.631831 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.631846 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:22Z","lastTransitionTime":"2026-01-01T08:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.735041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.735096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.735113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.735137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.735154 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:22Z","lastTransitionTime":"2026-01-01T08:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.837970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.838061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.838082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.838105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.838121 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:22Z","lastTransitionTime":"2026-01-01T08:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.941300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.941361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.941381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.941406 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:22 crc kubenswrapper[4867]: I0101 08:27:22.941425 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:22Z","lastTransitionTime":"2026-01-01T08:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.044443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.044508 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.044531 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.044562 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.044588 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.128032 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.128179 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:23 crc kubenswrapper[4867]: E0101 08:27:23.128353 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.128472 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:23 crc kubenswrapper[4867]: E0101 08:27:23.128666 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:23 crc kubenswrapper[4867]: E0101 08:27:23.128759 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.129974 4867 scope.go:117] "RemoveContainer" containerID="3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.147324 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.147407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.147435 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.147469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.147488 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.194534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.194848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.194865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.194918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.194937 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: E0101 08:27:23.216376 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.221392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.221439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.221458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.221482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.221499 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: E0101 08:27:23.251502 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.255942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.256013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.256032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.256057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.256078 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: E0101 08:27:23.273046 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.277216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.277292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.277319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.277348 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.277373 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: E0101 08:27:23.297573 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.302317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.302476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.302564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.302652 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.302736 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: E0101 08:27:23.322596 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: E0101 08:27:23.322967 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.325644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.325876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.325985 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.326160 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.326325 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.429421 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.429495 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.429518 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.429554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.429578 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.471185 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/1.log" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.474805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b"} Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.476010 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.486112 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec5205
43620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.501365 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.519022 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.532124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.532168 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.532180 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.532193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.532179 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.532240 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.549632 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.570724 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.586205 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.599866 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.617953 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.634645 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.634691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.634795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.634805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.634819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.634833 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.652433 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:00Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:00.287106 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:00.287118 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:00.287151 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:27:00.287163 6252 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:27:00.287176 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:00.287183 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:00.287199 6252 handler.go:208] Removed *v1.Node event handler 7\\\\nI0101 08:27:00.287211 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:00.287213 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:00.287267 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 08:27:00.287292 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:00.287309 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:00.287336 6252 factory.go:656] Stopping watch factory\\\\nI0101 08:27:00.287334 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0101 08:27:00.287348 6252 ovnkube.go:599] Stopped ovnkube\\\\nI0101 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.663549 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc 
kubenswrapper[4867]: I0101 08:27:23.675848 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.687398 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.699423 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.715209 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.730980 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:23Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.737737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.737797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.737816 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.737845 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.737861 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.840427 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.840486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.840504 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.840527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.840544 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.943711 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.943776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.943793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.943819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:23 crc kubenswrapper[4867]: I0101 08:27:23.943840 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:23Z","lastTransitionTime":"2026-01-01T08:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.047499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.047571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.047594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.047623 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.047647 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:24Z","lastTransitionTime":"2026-01-01T08:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.127972 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:24 crc kubenswrapper[4867]: E0101 08:27:24.128133 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.150007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.150035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.150045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.150059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.150070 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:24Z","lastTransitionTime":"2026-01-01T08:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.252744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.252803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.252821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.252847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.252876 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:24Z","lastTransitionTime":"2026-01-01T08:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.355730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.355794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.355817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.355846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.355869 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:24Z","lastTransitionTime":"2026-01-01T08:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.458870 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.458970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.459005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.459027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.459042 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:24Z","lastTransitionTime":"2026-01-01T08:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.481265 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/2.log" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.482260 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/1.log" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.486120 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b" exitCode=1 Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.486182 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b"} Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.486236 4867 scope.go:117] "RemoveContainer" containerID="3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.487256 4867 scope.go:117] "RemoveContainer" containerID="e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b" Jan 01 08:27:24 crc kubenswrapper[4867]: E0101 08:27:24.487506 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.513543 4867 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.532632 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.552303 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.561848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.561884 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.561942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.561971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.561992 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:24Z","lastTransitionTime":"2026-01-01T08:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.574949 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.600037 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.617461 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.638589 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.655479 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.665821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.665923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.665949 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.665986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.666004 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:24Z","lastTransitionTime":"2026-01-01T08:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.674489 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.704453 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccecca21314aa9556407b7a6f8524cfd1298751681c8fbedc5d9f04a7dfffc9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:00Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:00.287106 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:00.287118 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:00.287151 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0101 08:27:00.287163 6252 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0101 08:27:00.287176 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:00.287183 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:00.287199 6252 handler.go:208] Removed *v1.Node event handler 7\\\\nI0101 08:27:00.287211 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:00.287213 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:00.287267 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0101 08:27:00.287292 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:00.287309 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:00.287336 6252 factory.go:656] Stopping watch factory\\\\nI0101 08:27:00.287334 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0101 08:27:00.287348 6252 ovnkube.go:599] Stopped ovnkube\\\\nI0101 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:24Z\\\",\\\"message\\\":\\\"o:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:24.083933 6525 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0101 08:27:24.083941 6525 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0101 08:27:24.083949 6525 controller.go:156] Starting controller ef_node_controller with 1 workers\\\\nI0101 08:27:24.083964 6525 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0101 08:27:24.083969 6525 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084032 6525 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084082 6525 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084143 6525 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084317 6525 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084775 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:24.084818 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0101 08:27:24.084955 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},
{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.723276 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.743129 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.761049 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.769393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.769442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.769459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.769481 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.769496 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:24Z","lastTransitionTime":"2026-01-01T08:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.783492 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.802315 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.821421 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.838713 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:24Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:24 crc 
kubenswrapper[4867]: I0101 08:27:24.872221 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.872315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.872335 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.872362 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.872382 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:24Z","lastTransitionTime":"2026-01-01T08:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.975207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.975260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.975276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.975301 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:24 crc kubenswrapper[4867]: I0101 08:27:24.975318 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:24Z","lastTransitionTime":"2026-01-01T08:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.078281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.078382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.078402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.078428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.078446 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:25Z","lastTransitionTime":"2026-01-01T08:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.128111 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.128179 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:25 crc kubenswrapper[4867]: E0101 08:27:25.128293 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.128395 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:25 crc kubenswrapper[4867]: E0101 08:27:25.128540 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:25 crc kubenswrapper[4867]: E0101 08:27:25.128624 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.181263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.181312 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.181328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.181347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.181359 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:25Z","lastTransitionTime":"2026-01-01T08:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.284386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.284431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.284442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.284459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.284471 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:25Z","lastTransitionTime":"2026-01-01T08:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.387354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.387410 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.387426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.387449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.387466 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:25Z","lastTransitionTime":"2026-01-01T08:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.489981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.490708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.490759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.490799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.490819 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:25Z","lastTransitionTime":"2026-01-01T08:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.494316 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/2.log" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.500486 4867 scope.go:117] "RemoveContainer" containerID="e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b" Jan 01 08:27:25 crc kubenswrapper[4867]: E0101 08:27:25.500742 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.534373 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.554313 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.573172 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.592650 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.594068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.594140 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.594165 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.594194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.594217 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:25Z","lastTransitionTime":"2026-01-01T08:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.615310 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.631607 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.651839 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.673320 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.685935 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.697029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.697076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.697094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.697117 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.697135 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:25Z","lastTransitionTime":"2026-01-01T08:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.707801 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:24Z\\\",\\\"message\\\":\\\"o:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:24.083933 6525 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0101 08:27:24.083941 6525 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0101 08:27:24.083949 6525 controller.go:156] Starting controller ef_node_controller with 1 
workers\\\\nI0101 08:27:24.083964 6525 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0101 08:27:24.083969 6525 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084032 6525 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084082 6525 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084143 6525 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084317 6525 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084775 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:24.084818 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0101 08:27:24.084955 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.724480 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54
de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.739621 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.753131 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.769215 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.787207 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.800428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.800470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.800481 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.800498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.800509 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:25Z","lastTransitionTime":"2026-01-01T08:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.808384 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z 
is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.823311 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:25Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:25 crc 
kubenswrapper[4867]: I0101 08:27:25.902958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.903031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.903055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.903082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:25 crc kubenswrapper[4867]: I0101 08:27:25.903106 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:25Z","lastTransitionTime":"2026-01-01T08:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.005662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.005726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.005748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.005775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.005798 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:26Z","lastTransitionTime":"2026-01-01T08:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.108415 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.108474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.108492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.108521 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.108541 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:26Z","lastTransitionTime":"2026-01-01T08:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.128431 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:26 crc kubenswrapper[4867]: E0101 08:27:26.128629 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.211802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.211860 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.211914 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.211940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.211958 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:26Z","lastTransitionTime":"2026-01-01T08:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.315527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.315575 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.315589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.315604 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.315615 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:26Z","lastTransitionTime":"2026-01-01T08:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.418443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.418490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.418503 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.418517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.418527 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:26Z","lastTransitionTime":"2026-01-01T08:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.520432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.520504 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.520523 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.520548 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.520566 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:26Z","lastTransitionTime":"2026-01-01T08:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.623960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.624014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.624027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.624044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.624057 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:26Z","lastTransitionTime":"2026-01-01T08:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.726941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.727009 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.727032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.727060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.727084 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:26Z","lastTransitionTime":"2026-01-01T08:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.830797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.830859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.830876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.830926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.830944 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:26Z","lastTransitionTime":"2026-01-01T08:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.934296 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.934360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.934379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.934403 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:26 crc kubenswrapper[4867]: I0101 08:27:26.934421 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:26Z","lastTransitionTime":"2026-01-01T08:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.037467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.037597 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.037624 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.037653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.037675 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:27Z","lastTransitionTime":"2026-01-01T08:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.128579 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.128622 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.128638 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:27 crc kubenswrapper[4867]: E0101 08:27:27.128746 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:27 crc kubenswrapper[4867]: E0101 08:27:27.128792 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:27 crc kubenswrapper[4867]: E0101 08:27:27.128854 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.140755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.140805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.140837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.140864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.140912 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:27Z","lastTransitionTime":"2026-01-01T08:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.244767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.244840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.244861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.244917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.244949 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:27Z","lastTransitionTime":"2026-01-01T08:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.348227 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.348296 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.348321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.348352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.348377 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:27Z","lastTransitionTime":"2026-01-01T08:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.451468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.451544 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.451565 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.451589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.451605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:27Z","lastTransitionTime":"2026-01-01T08:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.553777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.553833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.553850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.553874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.553921 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:27Z","lastTransitionTime":"2026-01-01T08:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.657119 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.657186 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.657212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.657236 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.657252 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:27Z","lastTransitionTime":"2026-01-01T08:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.760309 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.760423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.760449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.760479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.760501 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:27Z","lastTransitionTime":"2026-01-01T08:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.863655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.863718 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.863740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.863769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.863789 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:27Z","lastTransitionTime":"2026-01-01T08:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.966429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.966473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.966482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.966499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:27 crc kubenswrapper[4867]: I0101 08:27:27.966512 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:27Z","lastTransitionTime":"2026-01-01T08:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.069264 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.069320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.069337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.069361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.069378 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:28Z","lastTransitionTime":"2026-01-01T08:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.128053 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:28 crc kubenswrapper[4867]: E0101 08:27:28.128238 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.171785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.171827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.171838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.171855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.171917 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:28Z","lastTransitionTime":"2026-01-01T08:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.274648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.274739 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.274758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.274783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.274801 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:28Z","lastTransitionTime":"2026-01-01T08:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.378537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.378591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.378612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.378638 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.378659 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:28Z","lastTransitionTime":"2026-01-01T08:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.482599 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.482657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.482675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.482698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.482718 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:28Z","lastTransitionTime":"2026-01-01T08:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.585228 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.585294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.585311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.585335 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.585353 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:28Z","lastTransitionTime":"2026-01-01T08:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.688554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.688651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.688674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.688703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.688724 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:28Z","lastTransitionTime":"2026-01-01T08:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.791298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.791442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.791764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.791798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.791815 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:28Z","lastTransitionTime":"2026-01-01T08:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.894846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.894957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.894974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.895006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.895023 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:28Z","lastTransitionTime":"2026-01-01T08:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.998073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.998143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.998166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.998196 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:28 crc kubenswrapper[4867]: I0101 08:27:28.998221 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:28Z","lastTransitionTime":"2026-01-01T08:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.101813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.101874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.101917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.101944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.101962 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:29Z","lastTransitionTime":"2026-01-01T08:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.127965 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.127994 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:29 crc kubenswrapper[4867]: E0101 08:27:29.128114 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.128215 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:29 crc kubenswrapper[4867]: E0101 08:27:29.128417 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:29 crc kubenswrapper[4867]: E0101 08:27:29.128508 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.204566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.204619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.204635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.204657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.204678 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:29Z","lastTransitionTime":"2026-01-01T08:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.307611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.307657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.307667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.307683 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.307694 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:29Z","lastTransitionTime":"2026-01-01T08:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.410216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.410262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.410275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.410293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.410305 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:29Z","lastTransitionTime":"2026-01-01T08:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.512500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.512527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.512535 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.512547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.512556 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:29Z","lastTransitionTime":"2026-01-01T08:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.615586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.615641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.615658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.615682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.615699 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:29Z","lastTransitionTime":"2026-01-01T08:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.719250 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.719315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.719334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.719358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.719376 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:29Z","lastTransitionTime":"2026-01-01T08:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.822122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.822177 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.822232 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.822262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.822282 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:29Z","lastTransitionTime":"2026-01-01T08:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.925101 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.925167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.925190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.925217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:29 crc kubenswrapper[4867]: I0101 08:27:29.925236 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:29Z","lastTransitionTime":"2026-01-01T08:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.027602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.027665 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.027681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.027722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.027739 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:30Z","lastTransitionTime":"2026-01-01T08:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.128197 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:30 crc kubenswrapper[4867]: E0101 08:27:30.128396 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.131643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.131706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.131723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.131746 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.131763 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:30Z","lastTransitionTime":"2026-01-01T08:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.234561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.234603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.234615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.234629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.234639 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:30Z","lastTransitionTime":"2026-01-01T08:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.336444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.336479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.336489 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.336504 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.336515 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:30Z","lastTransitionTime":"2026-01-01T08:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.440744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.441139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.441287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.441449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.441595 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:30Z","lastTransitionTime":"2026-01-01T08:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.544351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.544444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.544466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.544489 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.544505 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:30Z","lastTransitionTime":"2026-01-01T08:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.647687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.647735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.647748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.647764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.647775 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:30Z","lastTransitionTime":"2026-01-01T08:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.750928 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.750992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.751009 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.751045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.751060 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:30Z","lastTransitionTime":"2026-01-01T08:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.854582 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.854641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.854651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.854672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.854685 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:30Z","lastTransitionTime":"2026-01-01T08:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.957953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.958022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.958044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.958073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:30 crc kubenswrapper[4867]: I0101 08:27:30.958097 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:30Z","lastTransitionTime":"2026-01-01T08:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.061328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.061397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.061422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.061452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.061480 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:31Z","lastTransitionTime":"2026-01-01T08:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.127631 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.127726 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:31 crc kubenswrapper[4867]: E0101 08:27:31.127924 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.128087 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:31 crc kubenswrapper[4867]: E0101 08:27:31.128282 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:31 crc kubenswrapper[4867]: E0101 08:27:31.128442 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.144022 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.158186 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc 
kubenswrapper[4867]: I0101 08:27:31.164060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.164102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.164116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.164163 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.164196 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:31Z","lastTransitionTime":"2026-01-01T08:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.173474 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16c
c1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.194066 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.210052 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.222337 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.238803 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.252711 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.266975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.267006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.267015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.267029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.267038 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:31Z","lastTransitionTime":"2026-01-01T08:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.270304 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.284398 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.300863 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.315312 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.337806 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:24Z\\\",\\\"message\\\":\\\"o:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:24.083933 6525 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0101 08:27:24.083941 6525 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0101 08:27:24.083949 6525 controller.go:156] Starting controller ef_node_controller with 1 
workers\\\\nI0101 08:27:24.083964 6525 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0101 08:27:24.083969 6525 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084032 6525 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084082 6525 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084143 6525 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084317 6525 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084775 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:24.084818 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0101 08:27:24.084955 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.354220 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54
de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.369317 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.369936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.369978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.369990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.370008 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.370022 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:31Z","lastTransitionTime":"2026-01-01T08:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.380882 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.396400 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e
55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:31Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.471976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.472251 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.472321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:31 crc 
kubenswrapper[4867]: I0101 08:27:31.472384 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.472481 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:31Z","lastTransitionTime":"2026-01-01T08:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.575493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.575537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.575547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.575564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.575575 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:31Z","lastTransitionTime":"2026-01-01T08:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.678970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.679046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.679059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.679078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.679092 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:31Z","lastTransitionTime":"2026-01-01T08:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.781464 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.781534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.781554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.781586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.781602 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:31Z","lastTransitionTime":"2026-01-01T08:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.884333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.884391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.884408 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.884426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.884435 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:31Z","lastTransitionTime":"2026-01-01T08:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.988039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.988123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.988148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.988186 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:31 crc kubenswrapper[4867]: I0101 08:27:31.988209 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:31Z","lastTransitionTime":"2026-01-01T08:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.091457 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.092119 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.092299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.092493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.092637 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:32Z","lastTransitionTime":"2026-01-01T08:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.128287 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:32 crc kubenswrapper[4867]: E0101 08:27:32.128596 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.195183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.195503 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.195634 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.195758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.195936 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:32Z","lastTransitionTime":"2026-01-01T08:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.299130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.299179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.299195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.299219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.299235 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:32Z","lastTransitionTime":"2026-01-01T08:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.402413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.402510 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.402529 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.402587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.402605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:32Z","lastTransitionTime":"2026-01-01T08:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.506250 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.506557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.506687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.506832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.506987 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:32Z","lastTransitionTime":"2026-01-01T08:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.632630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.633111 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.633303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.633471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.633681 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:32Z","lastTransitionTime":"2026-01-01T08:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.736408 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.736508 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.736525 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.736547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.736568 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:32Z","lastTransitionTime":"2026-01-01T08:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.840375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.840438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.840455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.840483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.840501 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:32Z","lastTransitionTime":"2026-01-01T08:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.942981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.943015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.943026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.943041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:32 crc kubenswrapper[4867]: I0101 08:27:32.943052 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:32Z","lastTransitionTime":"2026-01-01T08:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.046333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.046367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.046377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.046390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.046400 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.127951 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.127965 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.128080 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:33 crc kubenswrapper[4867]: E0101 08:27:33.128518 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:33 crc kubenswrapper[4867]: E0101 08:27:33.128866 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:33 crc kubenswrapper[4867]: E0101 08:27:33.129053 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.149455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.149514 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.149527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.149564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.149578 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.252581 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.252632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.252646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.252661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.252674 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.355477 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.355526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.355538 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.355555 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.355567 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.458196 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.458267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.458291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.458320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.458342 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.561141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.561206 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.561228 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.561255 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.561277 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.611917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.611960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.611972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.611988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.611999 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: E0101 08:27:33.628132 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:33Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.632851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.633010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.633111 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.633201 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.633296 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:33Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.669274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.669307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.669317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.669334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.669344 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: E0101 08:27:33.685568 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:33Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.690320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.690366 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.690383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.690404 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.690421 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: E0101 08:27:33.708595 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:33Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:33 crc kubenswrapper[4867]: E0101 08:27:33.709047 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.711856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.711980 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.712003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.712031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.712055 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.815697 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.816112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.816256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.816400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.816536 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.920303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.920612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.920750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.920929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:33 crc kubenswrapper[4867]: I0101 08:27:33.921087 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:33Z","lastTransitionTime":"2026-01-01T08:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.024484 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.024792 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.025076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.025129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.025149 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:34Z","lastTransitionTime":"2026-01-01T08:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.127765 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:34 crc kubenswrapper[4867]: E0101 08:27:34.128041 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.128237 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.128295 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.128316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.128343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.128365 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:34Z","lastTransitionTime":"2026-01-01T08:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.231059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.231119 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.231136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.231159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.231177 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:34Z","lastTransitionTime":"2026-01-01T08:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.332909 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.332971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.332991 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.333015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.333033 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:34Z","lastTransitionTime":"2026-01-01T08:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.435330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.435391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.435410 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.435435 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.435452 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:34Z","lastTransitionTime":"2026-01-01T08:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.537407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.537451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.537461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.537478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.537489 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:34Z","lastTransitionTime":"2026-01-01T08:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.640402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.640445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.640456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.640470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.640479 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:34Z","lastTransitionTime":"2026-01-01T08:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.742902 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.742954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.742969 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.742988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.743002 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:34Z","lastTransitionTime":"2026-01-01T08:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.845566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.845631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.845651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.845680 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.845726 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:34Z","lastTransitionTime":"2026-01-01T08:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.948351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.948386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.948395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.948409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:34 crc kubenswrapper[4867]: I0101 08:27:34.948419 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:34Z","lastTransitionTime":"2026-01-01T08:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.052815 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.052922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.052941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.052998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.053016 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:35Z","lastTransitionTime":"2026-01-01T08:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.128555 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.128668 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.128682 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:35 crc kubenswrapper[4867]: E0101 08:27:35.128826 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:35 crc kubenswrapper[4867]: E0101 08:27:35.128990 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:35 crc kubenswrapper[4867]: E0101 08:27:35.129202 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.155554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.155593 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.155605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.155618 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.155629 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:35Z","lastTransitionTime":"2026-01-01T08:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.258609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.258653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.258667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.258686 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.258696 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:35Z","lastTransitionTime":"2026-01-01T08:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.361359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.361393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.361402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.361416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.361428 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:35Z","lastTransitionTime":"2026-01-01T08:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.463267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.463300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.463312 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.463325 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.463333 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:35Z","lastTransitionTime":"2026-01-01T08:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.566861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.566940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.566960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.566999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.567012 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:35Z","lastTransitionTime":"2026-01-01T08:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.669688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.669776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.669954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.669986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.670065 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:35Z","lastTransitionTime":"2026-01-01T08:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.773193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.773240 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.773249 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.773263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.773272 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:35Z","lastTransitionTime":"2026-01-01T08:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.875487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.875555 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.875579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.875606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.875626 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:35Z","lastTransitionTime":"2026-01-01T08:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.978037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.978074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.978086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.978100 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:35 crc kubenswrapper[4867]: I0101 08:27:35.978114 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:35Z","lastTransitionTime":"2026-01-01T08:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.082137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.084716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.084733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.084755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.084774 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:36Z","lastTransitionTime":"2026-01-01T08:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.128103 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:36 crc kubenswrapper[4867]: E0101 08:27:36.128299 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.188131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.188196 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.188214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.188237 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.188255 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:36Z","lastTransitionTime":"2026-01-01T08:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.291167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.291254 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.291273 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.291296 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.291313 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:36Z","lastTransitionTime":"2026-01-01T08:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.298500 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:36 crc kubenswrapper[4867]: E0101 08:27:36.298692 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:36 crc kubenswrapper[4867]: E0101 08:27:36.298794 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs podName:28af0def-191f-4949-b617-a7a07dd8145b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:08.298763643 +0000 UTC m=+97.434032452 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs") pod "network-metrics-daemon-kv8wr" (UID: "28af0def-191f-4949-b617-a7a07dd8145b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.394669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.394721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.394737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.394762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.394780 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:36Z","lastTransitionTime":"2026-01-01T08:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.496603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.496645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.496657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.496673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.496684 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:36Z","lastTransitionTime":"2026-01-01T08:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.533231 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wkbs8_da72a722-a2a3-459e-875a-e1605b442e05/kube-multus/0.log" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.533284 4867 generic.go:334] "Generic (PLEG): container finished" podID="da72a722-a2a3-459e-875a-e1605b442e05" containerID="d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f" exitCode=1 Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.533313 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wkbs8" event={"ID":"da72a722-a2a3-459e-875a-e1605b442e05","Type":"ContainerDied","Data":"d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f"} Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.533627 4867 scope.go:117] "RemoveContainer" containerID="d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.551714 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.575180 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.596710 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.599407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 
08:27:36.599449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.599470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.599500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.599556 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:36Z","lastTransitionTime":"2026-01-01T08:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.619259 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.634918 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.678518 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.698157 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.701822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.701855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.701863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.701877 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.701902 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:36Z","lastTransitionTime":"2026-01-01T08:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.713676 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.725770 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.735781 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.756282 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:24Z\\\",\\\"message\\\":\\\"o:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:24.083933 6525 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0101 08:27:24.083941 6525 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0101 08:27:24.083949 6525 controller.go:156] Starting controller ef_node_controller with 1 
workers\\\\nI0101 08:27:24.083964 6525 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0101 08:27:24.083969 6525 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084032 6525 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084082 6525 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084143 6525 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084317 6525 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084775 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:24.084818 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0101 08:27:24.084955 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.765432 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.775964 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.784737 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.794648 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.803826 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.804595 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.804618 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.804628 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.804642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.804652 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:36Z","lastTransitionTime":"2026-01-01T08:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.814784 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:36Z\\\",\\\"message\\\":\\\"2026-01-01T08:26:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e\\\\n2026-01-01T08:26:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e to /host/opt/cni/bin/\\\\n2026-01-01T08:26:51Z [verbose] multus-daemon started\\\\n2026-01-01T08:26:51Z [verbose] Readiness Indicator file check\\\\n2026-01-01T08:27:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:36Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.906951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.907010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.907028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.907057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:36 crc kubenswrapper[4867]: I0101 08:27:36.907078 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:36Z","lastTransitionTime":"2026-01-01T08:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.009128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.009176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.009190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.009209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.009220 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:37Z","lastTransitionTime":"2026-01-01T08:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.112680 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.112738 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.112756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.112778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.112795 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:37Z","lastTransitionTime":"2026-01-01T08:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.128160 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:37 crc kubenswrapper[4867]: E0101 08:27:37.128313 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.128426 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:37 crc kubenswrapper[4867]: E0101 08:27:37.128546 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.128160 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:37 crc kubenswrapper[4867]: E0101 08:27:37.128651 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.215478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.215548 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.215568 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.215591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.215608 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:37Z","lastTransitionTime":"2026-01-01T08:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.317504 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.317570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.317593 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.317619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.317640 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:37Z","lastTransitionTime":"2026-01-01T08:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.420015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.420056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.420072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.420094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.420111 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:37Z","lastTransitionTime":"2026-01-01T08:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.521901 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.521935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.521947 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.521961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.521972 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:37Z","lastTransitionTime":"2026-01-01T08:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.536958 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wkbs8_da72a722-a2a3-459e-875a-e1605b442e05/kube-multus/0.log" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.537064 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wkbs8" event={"ID":"da72a722-a2a3-459e-875a-e1605b442e05","Type":"ContainerStarted","Data":"602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12"} Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.551745 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.567986 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.587099 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.606955 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.621592 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.624710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.624767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.624785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.624807 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.624826 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:37Z","lastTransitionTime":"2026-01-01T08:27:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.642925 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c
7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce27
1d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.660585 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.673660 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.688319 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.715598 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:24Z\\\",\\\"message\\\":\\\"o:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:24.083933 6525 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0101 08:27:24.083941 6525 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0101 08:27:24.083949 6525 controller.go:156] Starting controller ef_node_controller with 1 
workers\\\\nI0101 08:27:24.083964 6525 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0101 08:27:24.083969 6525 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084032 6525 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084082 6525 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084143 6525 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084317 6525 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084775 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:24.084818 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0101 08:27:24.084955 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.727795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.727952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.727979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.728042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.728072 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:37Z","lastTransitionTime":"2026-01-01T08:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.731254 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.743637 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.761249 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.773632 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.788206 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:36Z\\\",\\\"message\\\":\\\"2026-01-01T08:26:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e\\\\n2026-01-01T08:26:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e to /host/opt/cni/bin/\\\\n2026-01-01T08:26:51Z [verbose] multus-daemon started\\\\n2026-01-01T08:26:51Z [verbose] Readiness Indicator file check\\\\n2026-01-01T08:27:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.801302 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc 
kubenswrapper[4867]: I0101 08:27:37.813831 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:37Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.830556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.830598 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.830610 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.830628 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.830642 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:37Z","lastTransitionTime":"2026-01-01T08:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.932645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.932679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.932687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.932700 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:37 crc kubenswrapper[4867]: I0101 08:27:37.932709 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:37Z","lastTransitionTime":"2026-01-01T08:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.035170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.035214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.035223 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.035237 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.035247 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:38Z","lastTransitionTime":"2026-01-01T08:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.128523 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:38 crc kubenswrapper[4867]: E0101 08:27:38.129053 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.129358 4867 scope.go:117] "RemoveContainer" containerID="e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b" Jan 01 08:27:38 crc kubenswrapper[4867]: E0101 08:27:38.129599 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.137228 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.137266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.137280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.137296 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.137309 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:38Z","lastTransitionTime":"2026-01-01T08:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.239287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.239322 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.239332 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.239347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.239356 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:38Z","lastTransitionTime":"2026-01-01T08:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.341424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.341468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.341479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.341494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.341504 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:38Z","lastTransitionTime":"2026-01-01T08:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.443337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.443380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.443394 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.443409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.443419 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:38Z","lastTransitionTime":"2026-01-01T08:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.547999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.548046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.548059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.548076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.548092 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:38Z","lastTransitionTime":"2026-01-01T08:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.651171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.651224 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.651236 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.651253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.651266 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:38Z","lastTransitionTime":"2026-01-01T08:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.754357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.754419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.754442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.754469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.754490 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:38Z","lastTransitionTime":"2026-01-01T08:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.858216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.858253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.858264 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.858279 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.858288 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:38Z","lastTransitionTime":"2026-01-01T08:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.960979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.961049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.961066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.961090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:38 crc kubenswrapper[4867]: I0101 08:27:38.961106 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:38Z","lastTransitionTime":"2026-01-01T08:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.064269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.064301 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.064309 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.064323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.064332 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:39Z","lastTransitionTime":"2026-01-01T08:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.130838 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:39 crc kubenswrapper[4867]: E0101 08:27:39.131012 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.131206 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:39 crc kubenswrapper[4867]: E0101 08:27:39.131271 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.131403 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:39 crc kubenswrapper[4867]: E0101 08:27:39.131461 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.167015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.167076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.167088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.167104 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.167115 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:39Z","lastTransitionTime":"2026-01-01T08:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.269330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.269594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.269694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.269798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.269880 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:39Z","lastTransitionTime":"2026-01-01T08:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.372855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.372968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.372991 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.373016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.373038 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:39Z","lastTransitionTime":"2026-01-01T08:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.475677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.475740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.475756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.475779 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.475794 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:39Z","lastTransitionTime":"2026-01-01T08:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.578239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.578310 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.578331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.578364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.578382 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:39Z","lastTransitionTime":"2026-01-01T08:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.681815 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.681857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.681874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.681920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.681937 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:39Z","lastTransitionTime":"2026-01-01T08:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.784118 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.784158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.784170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.784185 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.784196 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:39Z","lastTransitionTime":"2026-01-01T08:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.887183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.887223 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.887241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.887261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.887278 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:39Z","lastTransitionTime":"2026-01-01T08:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.990052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.990096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.990112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.990136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:39 crc kubenswrapper[4867]: I0101 08:27:39.990151 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:39Z","lastTransitionTime":"2026-01-01T08:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.093389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.093436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.093448 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.093472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.093486 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:40Z","lastTransitionTime":"2026-01-01T08:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.127860 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:40 crc kubenswrapper[4867]: E0101 08:27:40.128090 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.196300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.196377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.196402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.196434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.196457 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:40Z","lastTransitionTime":"2026-01-01T08:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.299645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.299702 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.299721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.299752 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.299775 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:40Z","lastTransitionTime":"2026-01-01T08:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.402507 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.402583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.402606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.402631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.402654 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:40Z","lastTransitionTime":"2026-01-01T08:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.505461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.505506 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.505517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.505530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.505540 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:40Z","lastTransitionTime":"2026-01-01T08:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.608285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.608415 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.608440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.608466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.608488 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:40Z","lastTransitionTime":"2026-01-01T08:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.710773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.710806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.710814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.710828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.710837 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:40Z","lastTransitionTime":"2026-01-01T08:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.812407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.812503 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.812521 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.812545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.812562 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:40Z","lastTransitionTime":"2026-01-01T08:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.914284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.914318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.914326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.914336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:40 crc kubenswrapper[4867]: I0101 08:27:40.914344 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:40Z","lastTransitionTime":"2026-01-01T08:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.016701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.016749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.016762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.016778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.016788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:41Z","lastTransitionTime":"2026-01-01T08:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.119293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.119331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.119341 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.119356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.119365 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:41Z","lastTransitionTime":"2026-01-01T08:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.127760 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.127791 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:41 crc kubenswrapper[4867]: E0101 08:27:41.127866 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:41 crc kubenswrapper[4867]: E0101 08:27:41.128018 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.128050 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:41 crc kubenswrapper[4867]: E0101 08:27:41.128221 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.148224 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\
\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.163930 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.183905 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.201104 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.219574 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.221176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.221207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.221216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.221231 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.221240 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:41Z","lastTransitionTime":"2026-01-01T08:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.235064 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.254658 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.267785 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.285230 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.310382 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:24Z\\\",\\\"message\\\":\\\"o:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:24.083933 6525 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0101 08:27:24.083941 6525 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0101 08:27:24.083949 6525 controller.go:156] Starting controller ef_node_controller with 1 
workers\\\\nI0101 08:27:24.083964 6525 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0101 08:27:24.083969 6525 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084032 6525 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084082 6525 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084143 6525 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084317 6525 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084775 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:24.084818 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0101 08:27:24.084955 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.323708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.323757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.323769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.323788 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.323800 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:41Z","lastTransitionTime":"2026-01-01T08:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.326338 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.340698 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.353360 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.366248 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.379450 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.396506 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:36Z\\\",\\\"message\\\":\\\"2026-01-01T08:26:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e\\\\n2026-01-01T08:26:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e to /host/opt/cni/bin/\\\\n2026-01-01T08:26:51Z [verbose] multus-daemon started\\\\n2026-01-01T08:26:51Z [verbose] Readiness Indicator file check\\\\n2026-01-01T08:27:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.409760 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:41Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:41 crc 
kubenswrapper[4867]: I0101 08:27:41.425876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.425963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.425988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.426018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.426043 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:41Z","lastTransitionTime":"2026-01-01T08:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.528295 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.528326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.528334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.528347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.528357 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:41Z","lastTransitionTime":"2026-01-01T08:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.629917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.629953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.629962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.629975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.629985 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:41Z","lastTransitionTime":"2026-01-01T08:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.736070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.736142 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.736164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.736190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.736209 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:41Z","lastTransitionTime":"2026-01-01T08:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.839671 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.839706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.839716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.839732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.839742 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:41Z","lastTransitionTime":"2026-01-01T08:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.942866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.942986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.943009 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.943035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:41 crc kubenswrapper[4867]: I0101 08:27:41.943054 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:41Z","lastTransitionTime":"2026-01-01T08:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.045185 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.045222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.045230 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.045244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.045254 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:42Z","lastTransitionTime":"2026-01-01T08:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.128096 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:42 crc kubenswrapper[4867]: E0101 08:27:42.128260 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.148092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.148233 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.148337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.148373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.148454 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:42Z","lastTransitionTime":"2026-01-01T08:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.251546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.251602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.251621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.251644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.251662 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:42Z","lastTransitionTime":"2026-01-01T08:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.354734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.354790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.354803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.354820 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.354832 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:42Z","lastTransitionTime":"2026-01-01T08:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.456975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.457005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.457015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.457030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.457039 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:42Z","lastTransitionTime":"2026-01-01T08:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.559486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.559516 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.559527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.559539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.559548 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:42Z","lastTransitionTime":"2026-01-01T08:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.662541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.662591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.662607 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.662629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.662646 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:42Z","lastTransitionTime":"2026-01-01T08:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.765424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.765490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.765507 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.765535 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.765552 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:42Z","lastTransitionTime":"2026-01-01T08:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.867648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.867729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.867767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.867797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.867815 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:42Z","lastTransitionTime":"2026-01-01T08:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.969467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.969545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.969564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.969589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:42 crc kubenswrapper[4867]: I0101 08:27:42.969605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:42Z","lastTransitionTime":"2026-01-01T08:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.072022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.072089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.072107 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.072131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.072147 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:43Z","lastTransitionTime":"2026-01-01T08:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.127738 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.127788 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.127845 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:43 crc kubenswrapper[4867]: E0101 08:27:43.127937 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:43 crc kubenswrapper[4867]: E0101 08:27:43.128052 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:43 crc kubenswrapper[4867]: E0101 08:27:43.128241 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.174826 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.174915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.174936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.174960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.174978 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:43Z","lastTransitionTime":"2026-01-01T08:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.276913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.276953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.276964 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.276977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.276985 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:43Z","lastTransitionTime":"2026-01-01T08:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.378787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.378825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.378835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.378850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.378859 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:43Z","lastTransitionTime":"2026-01-01T08:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.482313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.482391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.482417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.482449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.482470 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:43Z","lastTransitionTime":"2026-01-01T08:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.584988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.585050 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.585059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.585075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.585085 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:43Z","lastTransitionTime":"2026-01-01T08:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.687745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.687782 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.687791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.687804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.687813 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:43Z","lastTransitionTime":"2026-01-01T08:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.791003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.791073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.791094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.791122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.791143 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:43Z","lastTransitionTime":"2026-01-01T08:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.893139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.893168 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.893176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.893189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.893200 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:43Z","lastTransitionTime":"2026-01-01T08:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.996449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.996489 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.996500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.996515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:43 crc kubenswrapper[4867]: I0101 08:27:43.996524 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:43Z","lastTransitionTime":"2026-01-01T08:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.044782 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.044840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.044858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.044881 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.044926 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: E0101 08:27:44.061452 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:44Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.066252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.066284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.066292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.066305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.066314 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: E0101 08:27:44.084114 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:44Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.088523 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.088588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.088606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.088632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.088650 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: E0101 08:27:44.102726 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:44Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.106822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.106873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.106913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.106934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.106952 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: E0101 08:27:44.120948 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:44Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.124447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.124494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.124513 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.124533 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.124548 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.127497 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:44 crc kubenswrapper[4867]: E0101 08:27:44.127619 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:44 crc kubenswrapper[4867]: E0101 08:27:44.138535 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:44Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:44 crc kubenswrapper[4867]: E0101 08:27:44.138642 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.140515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.140568 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.140587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.140609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.140625 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.243520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.243575 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.243593 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.243616 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.243632 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.346503 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.346582 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.346604 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.346629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.346650 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.448540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.448574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.448585 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.448602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.448614 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.549929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.549997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.550022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.550049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.550069 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.652476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.652515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.652526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.652541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.652589 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.755677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.755770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.756184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.756271 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.756539 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.859304 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.859358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.859375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.859399 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.859415 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.962667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.962756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.962780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.962810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:44 crc kubenswrapper[4867]: I0101 08:27:44.962830 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:44Z","lastTransitionTime":"2026-01-01T08:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.065343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.065381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.065389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.065407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.065416 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:45Z","lastTransitionTime":"2026-01-01T08:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.128636 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.128638 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.128672 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:45 crc kubenswrapper[4867]: E0101 08:27:45.128859 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:45 crc kubenswrapper[4867]: E0101 08:27:45.128967 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:45 crc kubenswrapper[4867]: E0101 08:27:45.129053 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.167945 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.168006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.168026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.168053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.168072 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:45Z","lastTransitionTime":"2026-01-01T08:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.270867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.270972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.270995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.271023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.271047 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:45Z","lastTransitionTime":"2026-01-01T08:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.373024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.373084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.373103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.373126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.373143 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:45Z","lastTransitionTime":"2026-01-01T08:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.476358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.476416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.476441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.476472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.476493 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:45Z","lastTransitionTime":"2026-01-01T08:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.578759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.578818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.578835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.578864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.578959 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:45Z","lastTransitionTime":"2026-01-01T08:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.681916 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.682006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.682033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.682063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.682088 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:45Z","lastTransitionTime":"2026-01-01T08:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.784876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.784946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.784961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.784981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.784996 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:45Z","lastTransitionTime":"2026-01-01T08:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.888363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.888404 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.888421 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.888443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.888459 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:45Z","lastTransitionTime":"2026-01-01T08:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.990743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.990796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.990814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.990838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:45 crc kubenswrapper[4867]: I0101 08:27:45.990857 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:45Z","lastTransitionTime":"2026-01-01T08:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.097035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.097086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.097103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.097126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.097148 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:46Z","lastTransitionTime":"2026-01-01T08:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.127987 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:46 crc kubenswrapper[4867]: E0101 08:27:46.128166 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.200857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.200949 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.200966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.200989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.201009 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:46Z","lastTransitionTime":"2026-01-01T08:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.303544 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.303608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.303630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.303658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.303679 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:46Z","lastTransitionTime":"2026-01-01T08:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.406994 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.407051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.407067 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.407091 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.407109 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:46Z","lastTransitionTime":"2026-01-01T08:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.509990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.510045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.510064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.510086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.510104 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:46Z","lastTransitionTime":"2026-01-01T08:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.612624 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.612667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.612682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.612703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.612720 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:46Z","lastTransitionTime":"2026-01-01T08:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.716088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.716136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.716152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.716175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.716192 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:46Z","lastTransitionTime":"2026-01-01T08:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.819413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.819477 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.819492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.819510 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.819521 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:46Z","lastTransitionTime":"2026-01-01T08:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.922446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.922520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.922542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.922570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:46 crc kubenswrapper[4867]: I0101 08:27:46.922594 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:46Z","lastTransitionTime":"2026-01-01T08:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.025007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.025076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.025099 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.025127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.025148 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:47Z","lastTransitionTime":"2026-01-01T08:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.128019 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.128025 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.128174 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.128289 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.128338 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:47 crc kubenswrapper[4867]: E0101 08:27:47.128347 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:47 crc kubenswrapper[4867]: E0101 08:27:47.128485 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:47 crc kubenswrapper[4867]: E0101 08:27:47.128618 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.129029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.129103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.129127 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:47Z","lastTransitionTime":"2026-01-01T08:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.232643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.232709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.232732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.232829 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.232862 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:47Z","lastTransitionTime":"2026-01-01T08:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.336052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.336106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.336124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.336146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.336166 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:47Z","lastTransitionTime":"2026-01-01T08:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.440337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.440396 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.440412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.440435 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.440453 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:47Z","lastTransitionTime":"2026-01-01T08:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.542952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.543007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.543024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.543054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.543088 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:47Z","lastTransitionTime":"2026-01-01T08:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.645948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.645999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.646015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.646037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.646053 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:47Z","lastTransitionTime":"2026-01-01T08:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.748966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.749028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.749051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.749079 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.749104 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:47Z","lastTransitionTime":"2026-01-01T08:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.851923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.851982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.851999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.852024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.852042 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:47Z","lastTransitionTime":"2026-01-01T08:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.955186 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.955242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.955264 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.955292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:47 crc kubenswrapper[4867]: I0101 08:27:47.955316 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:47Z","lastTransitionTime":"2026-01-01T08:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.059048 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.059111 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.059129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.059152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.059169 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:48Z","lastTransitionTime":"2026-01-01T08:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.128308 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:48 crc kubenswrapper[4867]: E0101 08:27:48.128548 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.164002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.164060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.164077 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.164102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.164121 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:48Z","lastTransitionTime":"2026-01-01T08:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.266986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.267061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.267078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.267102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.267119 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:48Z","lastTransitionTime":"2026-01-01T08:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.370694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.370793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.370813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.370845 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.370867 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:48Z","lastTransitionTime":"2026-01-01T08:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.473878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.473966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.473983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.474005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.474024 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:48Z","lastTransitionTime":"2026-01-01T08:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.576349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.576422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.576448 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.576475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.576492 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:48Z","lastTransitionTime":"2026-01-01T08:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.679376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.679421 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.679439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.679461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.679477 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:48Z","lastTransitionTime":"2026-01-01T08:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.781666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.781751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.781778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.781810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.781833 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:48Z","lastTransitionTime":"2026-01-01T08:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.885313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.885379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.885400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.885425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.885446 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:48Z","lastTransitionTime":"2026-01-01T08:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.988217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.988280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.988301 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.988325 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:48 crc kubenswrapper[4867]: I0101 08:27:48.988343 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:48Z","lastTransitionTime":"2026-01-01T08:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.091819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.091876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.091922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.091950 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.091967 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:49Z","lastTransitionTime":"2026-01-01T08:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.128366 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.128424 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.128443 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:49 crc kubenswrapper[4867]: E0101 08:27:49.128556 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:49 crc kubenswrapper[4867]: E0101 08:27:49.128660 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:49 crc kubenswrapper[4867]: E0101 08:27:49.128796 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.195343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.195408 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.195425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.195448 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.195467 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:49Z","lastTransitionTime":"2026-01-01T08:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.298354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.298413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.298430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.298455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.298471 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:49Z","lastTransitionTime":"2026-01-01T08:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.401367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.401426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.401444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.401468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.401485 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:49Z","lastTransitionTime":"2026-01-01T08:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.505376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.505452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.505475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.505506 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.505533 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:49Z","lastTransitionTime":"2026-01-01T08:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.608145 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.608227 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.608253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.608284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.608308 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:49Z","lastTransitionTime":"2026-01-01T08:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.712196 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.712260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.712278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.712302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.712322 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:49Z","lastTransitionTime":"2026-01-01T08:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.815305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.815401 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.815419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.815443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.815464 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:49Z","lastTransitionTime":"2026-01-01T08:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.918589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.918670 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.918739 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.918766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:49 crc kubenswrapper[4867]: I0101 08:27:49.918783 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:49Z","lastTransitionTime":"2026-01-01T08:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.022017 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.022858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.023033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.023190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.023324 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:50Z","lastTransitionTime":"2026-01-01T08:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.126540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.126614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.126663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.126690 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.126712 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:50Z","lastTransitionTime":"2026-01-01T08:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.127649 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:50 crc kubenswrapper[4867]: E0101 08:27:50.127971 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.229570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.230070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.230280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.230503 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.230705 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:50Z","lastTransitionTime":"2026-01-01T08:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.334476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.334823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.334993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.335124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.335280 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:50Z","lastTransitionTime":"2026-01-01T08:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.438892 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.438978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.438995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.439018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.439036 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:50Z","lastTransitionTime":"2026-01-01T08:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.541946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.542345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.542521 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.542665 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.542799 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:50Z","lastTransitionTime":"2026-01-01T08:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.646121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.646170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.646183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.646200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.646213 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:50Z","lastTransitionTime":"2026-01-01T08:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.761694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.761753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.761772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.761799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.761818 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:50Z","lastTransitionTime":"2026-01-01T08:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.864591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.864669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.864692 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.864724 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.864747 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:50Z","lastTransitionTime":"2026-01-01T08:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.968336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.968397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.968416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.968440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:50 crc kubenswrapper[4867]: I0101 08:27:50.968457 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:50Z","lastTransitionTime":"2026-01-01T08:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.072308 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.072380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.072403 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.072431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.072453 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:51Z","lastTransitionTime":"2026-01-01T08:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.128595 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.128709 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:51 crc kubenswrapper[4867]: E0101 08:27:51.128799 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.128838 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:51 crc kubenswrapper[4867]: E0101 08:27:51.129260 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:51 crc kubenswrapper[4867]: E0101 08:27:51.129381 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.153820 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.175279 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.175505 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.175555 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.175579 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.175608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.175629 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:51Z","lastTransitionTime":"2026-01-01T08:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.193521 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.211524 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.228199 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.239785 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.261350 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.275147 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.279765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.279827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.279848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.279871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.279936 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:51Z","lastTransitionTime":"2026-01-01T08:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.291987 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.315357 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:24Z\\\",\\\"message\\\":\\\"o:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:24.083933 6525 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0101 08:27:24.083941 6525 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0101 08:27:24.083949 6525 controller.go:156] Starting controller ef_node_controller with 1 
workers\\\\nI0101 08:27:24.083964 6525 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0101 08:27:24.083969 6525 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084032 6525 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084082 6525 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084143 6525 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084317 6525 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084775 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:24.084818 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0101 08:27:24.084955 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.330698 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54
de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.349555 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.368024 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.383351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.383401 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.383418 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.383443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.383462 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:51Z","lastTransitionTime":"2026-01-01T08:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.386691 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.402366 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.422603 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:36Z\\\",\\\"message\\\":\\\"2026-01-01T08:26:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e\\\\n2026-01-01T08:26:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e to /host/opt/cni/bin/\\\\n2026-01-01T08:26:51Z [verbose] multus-daemon started\\\\n2026-01-01T08:26:51Z [verbose] Readiness Indicator file check\\\\n2026-01-01T08:27:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.437330 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:51Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:51 crc 
kubenswrapper[4867]: I0101 08:27:51.487092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.487173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.487197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.487222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.487239 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:51Z","lastTransitionTime":"2026-01-01T08:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.589078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.589155 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.589178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.589208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.589231 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:51Z","lastTransitionTime":"2026-01-01T08:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.692323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.692400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.692421 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.692445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.692462 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:51Z","lastTransitionTime":"2026-01-01T08:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.795455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.795509 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.795527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.795551 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.795566 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:51Z","lastTransitionTime":"2026-01-01T08:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.898638 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.898695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.898714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.898737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:51 crc kubenswrapper[4867]: I0101 08:27:51.898759 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:51Z","lastTransitionTime":"2026-01-01T08:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.001709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.001772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.001790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.001819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.001840 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:52Z","lastTransitionTime":"2026-01-01T08:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.105102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.105196 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.105215 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.105241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.105260 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:52Z","lastTransitionTime":"2026-01-01T08:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.127964 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:52 crc kubenswrapper[4867]: E0101 08:27:52.128152 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.208722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.208776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.208795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.208816 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.208833 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:52Z","lastTransitionTime":"2026-01-01T08:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.311927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.312049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.312068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.312092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.312110 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:52Z","lastTransitionTime":"2026-01-01T08:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.414601 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.414673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.414696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.414729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.414750 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:52Z","lastTransitionTime":"2026-01-01T08:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.518254 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.518315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.518336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.518360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.518377 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:52Z","lastTransitionTime":"2026-01-01T08:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.621881 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.621981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.621998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.622024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.622041 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:52Z","lastTransitionTime":"2026-01-01T08:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.725767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.725820 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.725838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.725862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.725910 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:52Z","lastTransitionTime":"2026-01-01T08:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.828995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.829065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.829087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.829112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.829129 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:52Z","lastTransitionTime":"2026-01-01T08:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.932844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.932948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.932971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.932999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.933020 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:52Z","lastTransitionTime":"2026-01-01T08:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:52 crc kubenswrapper[4867]: I0101 08:27:52.979684 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:27:52 crc kubenswrapper[4867]: E0101 08:27:52.980000 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-01 08:28:56.979964051 +0000 UTC m=+146.115232850 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.035445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.035707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.035864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.036137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.036281 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:53Z","lastTransitionTime":"2026-01-01T08:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.081609 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.081667 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.081758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.081798 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.081914 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.081944 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.082030 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.082039 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:28:57.082010421 +0000 UTC m=+146.217279230 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.082096 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-01 08:28:57.082072243 +0000 UTC m=+146.217341052 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.081952 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.082139 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.082202 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-01 08:28:57.082184116 +0000 UTC m=+146.217452925 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.082302 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.082354 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.082377 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.082475 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-01 08:28:57.082445583 +0000 UTC m=+146.217714392 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.128362 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.128424 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.128458 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.128618 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.128871 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:53 crc kubenswrapper[4867]: E0101 08:27:53.129544 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.130105 4867 scope.go:117] "RemoveContainer" containerID="e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.139518 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.139606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.139630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.139656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.139680 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:53Z","lastTransitionTime":"2026-01-01T08:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.242774 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.242914 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.242943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.242972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.242994 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:53Z","lastTransitionTime":"2026-01-01T08:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.347428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.347805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.347826 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.347853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.347875 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:53Z","lastTransitionTime":"2026-01-01T08:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.452011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.452059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.452076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.452098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.452117 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:53Z","lastTransitionTime":"2026-01-01T08:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.555297 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.555343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.555359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.555382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.555399 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:53Z","lastTransitionTime":"2026-01-01T08:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.593323 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/2.log" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.597778 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b"} Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.598460 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.621219 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.641293 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.658790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.658835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.658847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.658865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.658876 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:53Z","lastTransitionTime":"2026-01-01T08:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.669265 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.693419 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.705992 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.717202 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.731105 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.741132 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.753786 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.761289 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.761327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.761341 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.761359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.761370 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:53Z","lastTransitionTime":"2026-01-01T08:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.773218 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:24Z\\\",\\\"message\\\":\\\"o:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:24.083933 6525 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0101 08:27:24.083941 6525 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0101 08:27:24.083949 6525 controller.go:156] Starting controller ef_node_controller with 1 
workers\\\\nI0101 08:27:24.083964 6525 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0101 08:27:24.083969 6525 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084032 6525 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084082 6525 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084143 6525 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084317 6525 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084775 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:24.084818 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0101 08:27:24.084955 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.785758 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54
de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.798351 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.812986 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.830639 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.852011 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:36Z\\\",\\\"message\\\":\\\"2026-01-01T08:26:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e\\\\n2026-01-01T08:26:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e to /host/opt/cni/bin/\\\\n2026-01-01T08:26:51Z [verbose] multus-daemon started\\\\n2026-01-01T08:26:51Z [verbose] Readiness Indicator file check\\\\n2026-01-01T08:27:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.864272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.864335 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.864354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.864377 4867 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.864395 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:53Z","lastTransitionTime":"2026-01-01T08:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.867566 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc 
kubenswrapper[4867]: I0101 08:27:53.885929 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:53Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.966415 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.966458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.966468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.966481 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:53 crc kubenswrapper[4867]: I0101 08:27:53.966490 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:53Z","lastTransitionTime":"2026-01-01T08:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.069022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.069067 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.069079 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.069099 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.069111 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.127557 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:54 crc kubenswrapper[4867]: E0101 08:27:54.127711 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.160768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.160802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.160811 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.160824 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.160834 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: E0101 08:27:54.172288 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.176190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.176245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.176263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.176285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.176306 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: E0101 08:27:54.187721 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.191153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.191184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.191196 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.191212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.191223 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: E0101 08:27:54.205131 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.208569 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.208606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.208619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.208641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.208652 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: E0101 08:27:54.223581 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.227190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.227232 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.227246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.227261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.227272 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: E0101 08:27:54.242124 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: E0101 08:27:54.242276 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.243556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.243607 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.243622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.243637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.243647 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.346665 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.346780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.346810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.346854 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.346877 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.449573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.449631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.449648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.449671 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.449689 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.552354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.552421 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.552446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.552473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.552491 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.604516 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/3.log" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.605584 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/2.log" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.609995 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" exitCode=1 Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.610048 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b"} Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.610100 4867 scope.go:117] "RemoveContainer" containerID="e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.613340 4867 scope.go:117] "RemoveContainer" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" Jan 01 08:27:54 crc kubenswrapper[4867]: E0101 08:27:54.613785 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.633431 4867 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.650501 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.662383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.662472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.662514 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.662549 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.662575 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.672417 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.704542 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0210cc190fc2af5c125315618c94a0254f72a47309a2d1bb03b3a2b36b0f82b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:24Z\\\",\\\"message\\\":\\\"o:208] Removed *v1.EgressFirewall event handler 9\\\\nI0101 08:27:24.083933 6525 shared_informer.go:313] Waiting for caches to sync for ef_node_controller\\\\nI0101 08:27:24.083941 6525 shared_informer.go:320] Caches are synced for ef_node_controller\\\\nI0101 08:27:24.083949 6525 controller.go:156] Starting controller ef_node_controller with 1 
workers\\\\nI0101 08:27:24.083964 6525 egressqos.go:192] Setting up event handlers for EgressQoS\\\\nI0101 08:27:24.083969 6525 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084032 6525 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084082 6525 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084143 6525 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084317 6525 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:24.084775 6525 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:24.084818 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0101 08:27:24.084955 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"rom sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0101 08:27:54.133081 6952 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:54.133251 6952 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:54.133664 6952 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 
08:27:54.133724 6952 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:54.133734 6952 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:54.133750 6952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:54.133756 6952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:54.133780 6952 factory.go:656] Stopping watch factory\\\\nI0101 08:27:54.133793 6952 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:54.133821 6952 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0101 08:27:54.133840 6952 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:54.133850 6952 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:54.133861 6952 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.723329 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.745189 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.763245 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.766643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.766694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.766713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.766741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.766761 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.785926 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.805147 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.823705 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:36Z\\\",\\\"message\\\":\\\"2026-01-01T08:26:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e\\\\n2026-01-01T08:26:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e to /host/opt/cni/bin/\\\\n2026-01-01T08:26:51Z [verbose] multus-daemon started\\\\n2026-01-01T08:26:51Z [verbose] Readiness Indicator file check\\\\n2026-01-01T08:27:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.840508 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc 
kubenswrapper[4867]: I0101 08:27:54.861710 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74
fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.869887 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.869971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.869990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.870014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.870032 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.879053 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.894838 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.914795 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.935955 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.952417 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:54Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.972435 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.972515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.972534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.972557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:54 crc kubenswrapper[4867]: I0101 08:27:54.972575 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:54Z","lastTransitionTime":"2026-01-01T08:27:54Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.075657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.075716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.075735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.075759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.075775 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:55Z","lastTransitionTime":"2026-01-01T08:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.128479 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.128517 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.128528 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:55 crc kubenswrapper[4867]: E0101 08:27:55.128643 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:55 crc kubenswrapper[4867]: E0101 08:27:55.128771 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:55 crc kubenswrapper[4867]: E0101 08:27:55.128955 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.179293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.179363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.179381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.179407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.179424 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:55Z","lastTransitionTime":"2026-01-01T08:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.282573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.282639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.282662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.282694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.282924 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:55Z","lastTransitionTime":"2026-01-01T08:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.385648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.385691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.385702 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.385718 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.385730 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:55Z","lastTransitionTime":"2026-01-01T08:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.488230 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.488322 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.488347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.488377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.488400 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:55Z","lastTransitionTime":"2026-01-01T08:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.591258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.591326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.591343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.591368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.591390 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:55Z","lastTransitionTime":"2026-01-01T08:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.616819 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/3.log" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.622452 4867 scope.go:117] "RemoveContainer" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" Jan 01 08:27:55 crc kubenswrapper[4867]: E0101 08:27:55.622724 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.647090 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.672369 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.692189 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.694687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.694799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.694821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.694879 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.694962 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:55Z","lastTransitionTime":"2026-01-01T08:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.713408 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.734677 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.749128 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.767062 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.785016 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.798371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.798432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.798451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.798477 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.798500 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:55Z","lastTransitionTime":"2026-01-01T08:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.800106 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.820687 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"rom sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0101 08:27:54.133081 6952 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:54.133251 6952 
reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:54.133664 6952 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:54.133724 6952 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:54.133734 6952 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:54.133750 6952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:54.133756 6952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:54.133780 6952 factory.go:656] Stopping watch factory\\\\nI0101 08:27:54.133793 6952 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:54.133821 6952 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0101 08:27:54.133840 6952 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:54.133850 6952 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:54.133861 6952 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.831336 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54
de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.851572 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.863471 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.875607 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.885278 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.901208 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:36Z\\\",\\\"message\\\":\\\"2026-01-01T08:26:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e\\\\n2026-01-01T08:26:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e to /host/opt/cni/bin/\\\\n2026-01-01T08:26:51Z [verbose] multus-daemon started\\\\n2026-01-01T08:26:51Z [verbose] Readiness Indicator file check\\\\n2026-01-01T08:27:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.902286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.902321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.902333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.902350 4867 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.902363 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:55Z","lastTransitionTime":"2026-01-01T08:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:55 crc kubenswrapper[4867]: I0101 08:27:55.913413 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:27:55Z is after 2025-08-24T17:21:41Z" Jan 01 08:27:56 crc 
kubenswrapper[4867]: I0101 08:27:56.005185 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.005232 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.005243 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.005261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.005274 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:56Z","lastTransitionTime":"2026-01-01T08:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.108661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.108720 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.108739 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.108763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.108780 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:56Z","lastTransitionTime":"2026-01-01T08:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.127644 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:56 crc kubenswrapper[4867]: E0101 08:27:56.127845 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.211796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.211857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.211878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.211930 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.211948 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:56Z","lastTransitionTime":"2026-01-01T08:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.314319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.314377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.314397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.314421 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.314438 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:56Z","lastTransitionTime":"2026-01-01T08:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.417580 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.417664 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.417685 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.417712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.417732 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:56Z","lastTransitionTime":"2026-01-01T08:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.520342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.520395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.520411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.520433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.520451 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:56Z","lastTransitionTime":"2026-01-01T08:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.623661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.623727 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.623745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.623768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.623786 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:56Z","lastTransitionTime":"2026-01-01T08:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.726777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.726835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.726856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.726909 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.726934 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:56Z","lastTransitionTime":"2026-01-01T08:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.830144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.830206 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.830223 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.830246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.830264 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:56Z","lastTransitionTime":"2026-01-01T08:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.933012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.933074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.933092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.933118 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:56 crc kubenswrapper[4867]: I0101 08:27:56.933142 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:56Z","lastTransitionTime":"2026-01-01T08:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.035978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.036054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.036074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.036101 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.036120 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:57Z","lastTransitionTime":"2026-01-01T08:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.127813 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.127940 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:57 crc kubenswrapper[4867]: E0101 08:27:57.128009 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.127946 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:57 crc kubenswrapper[4867]: E0101 08:27:57.128116 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:57 crc kubenswrapper[4867]: E0101 08:27:57.128225 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.140131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.140180 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.140200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.140222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.140241 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:57Z","lastTransitionTime":"2026-01-01T08:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.243476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.243552 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.243576 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.243605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.243625 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:57Z","lastTransitionTime":"2026-01-01T08:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.347094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.347157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.347180 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.347212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.347233 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:57Z","lastTransitionTime":"2026-01-01T08:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.450595 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.451354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.451392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.451429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.451448 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:57Z","lastTransitionTime":"2026-01-01T08:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.554584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.554644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.554662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.554687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.554707 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:57Z","lastTransitionTime":"2026-01-01T08:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.657869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.657974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.657993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.658018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.658036 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:57Z","lastTransitionTime":"2026-01-01T08:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.760696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.760767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.760786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.760837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.760859 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:57Z","lastTransitionTime":"2026-01-01T08:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.864205 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.864277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.864300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.864329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.864350 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:57Z","lastTransitionTime":"2026-01-01T08:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.966945 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.966976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.966987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.967002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:57 crc kubenswrapper[4867]: I0101 08:27:57.967014 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:57Z","lastTransitionTime":"2026-01-01T08:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.069357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.069462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.069479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.069506 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.069525 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:58Z","lastTransitionTime":"2026-01-01T08:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.128174 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:27:58 crc kubenswrapper[4867]: E0101 08:27:58.128340 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.173268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.173364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.173383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.173405 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.173425 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:58Z","lastTransitionTime":"2026-01-01T08:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.276559 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.276621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.276637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.276662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.276685 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:58Z","lastTransitionTime":"2026-01-01T08:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.380575 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.381052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.381270 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.381541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.381687 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:58Z","lastTransitionTime":"2026-01-01T08:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.484684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.484766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.484792 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.484823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.485069 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:58Z","lastTransitionTime":"2026-01-01T08:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.587636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.587696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.587713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.587736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.587754 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:58Z","lastTransitionTime":"2026-01-01T08:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.690400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.690471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.690493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.690525 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.690543 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:58Z","lastTransitionTime":"2026-01-01T08:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.793463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.793572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.793590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.793615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.793633 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:58Z","lastTransitionTime":"2026-01-01T08:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.896847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.896946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.896970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.897001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:58 crc kubenswrapper[4867]: I0101 08:27:58.897023 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:58Z","lastTransitionTime":"2026-01-01T08:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.001005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.001065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.001084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.001110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.001128 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:59Z","lastTransitionTime":"2026-01-01T08:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.103579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.103638 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.103654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.103678 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.103695 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:59Z","lastTransitionTime":"2026-01-01T08:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.128370 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.128444 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:27:59 crc kubenswrapper[4867]: E0101 08:27:59.128548 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:27:59 crc kubenswrapper[4867]: E0101 08:27:59.128704 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.128721 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:27:59 crc kubenswrapper[4867]: E0101 08:27:59.128948 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.208214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.208319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.208345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.208377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.208395 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:59Z","lastTransitionTime":"2026-01-01T08:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.312012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.312075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.312115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.312152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.312176 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:59Z","lastTransitionTime":"2026-01-01T08:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.415517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.415572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.415593 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.415615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.415631 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:59Z","lastTransitionTime":"2026-01-01T08:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.518509 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.518846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.519054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.519114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.519135 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:59Z","lastTransitionTime":"2026-01-01T08:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.623988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.624126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.624153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.624218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.624247 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:59Z","lastTransitionTime":"2026-01-01T08:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.727911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.727979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.727995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.728039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.728053 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:59Z","lastTransitionTime":"2026-01-01T08:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.831558 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.831702 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.831733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.831766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.831827 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:59Z","lastTransitionTime":"2026-01-01T08:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.935173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.935260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.935279 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.935307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:27:59 crc kubenswrapper[4867]: I0101 08:27:59.935325 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:27:59Z","lastTransitionTime":"2026-01-01T08:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.038966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.039035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.039053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.039083 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.039102 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:00Z","lastTransitionTime":"2026-01-01T08:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.127680 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:00 crc kubenswrapper[4867]: E0101 08:28:00.128079 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.143152 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.143273 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.143329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.143346 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.143369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.143387 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:00Z","lastTransitionTime":"2026-01-01T08:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.247431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.247678 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.247708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.247766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.247788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:00Z","lastTransitionTime":"2026-01-01T08:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.350722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.350833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.350856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.350882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.350921 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:00Z","lastTransitionTime":"2026-01-01T08:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.454328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.454393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.454410 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.454434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.454454 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:00Z","lastTransitionTime":"2026-01-01T08:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.557764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.557957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.557994 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.558081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.558153 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:00Z","lastTransitionTime":"2026-01-01T08:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.661198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.661268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.661288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.661316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.661336 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:00Z","lastTransitionTime":"2026-01-01T08:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.764490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.764627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.764656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.764771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.764799 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:00Z","lastTransitionTime":"2026-01-01T08:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.868098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.868153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.868176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.868206 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.868229 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:00Z","lastTransitionTime":"2026-01-01T08:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.976188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.976331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.976353 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.976383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:00 crc kubenswrapper[4867]: I0101 08:28:00.976415 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:00Z","lastTransitionTime":"2026-01-01T08:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.079979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.080041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.080061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.080084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.080101 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:01Z","lastTransitionTime":"2026-01-01T08:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.128319 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.129170 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:01 crc kubenswrapper[4867]: E0101 08:28:01.129363 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.129437 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:01 crc kubenswrapper[4867]: E0101 08:28:01.129611 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:01 crc kubenswrapper[4867]: E0101 08:28:01.130062 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.147283 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":
\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.171406 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"rom sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0101 08:27:54.133081 6952 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:54.133251 6952 
reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:54.133664 6952 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:54.133724 6952 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:54.133734 6952 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:54.133750 6952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:54.133756 6952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:54.133780 6952 factory.go:656] Stopping watch factory\\\\nI0101 08:27:54.133793 6952 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:54.133821 6952 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0101 08:27:54.133840 6952 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:54.133850 6952 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:54.133861 6952 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.183047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.183087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.183102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.183124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.183143 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:01Z","lastTransitionTime":"2026-01-01T08:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.190757 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.210564 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.225591 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.246815 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.267262 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf
10077d9b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:36Z\\\",\\\"message\\\":\\\"2026-01-01T08:26:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e\\\\n2026-01-01T08:26:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e to /host/opt/cni/bin/\\\\n2026-01-01T08:26:51Z [verbose] multus-daemon started\\\\n2026-01-01T08:26:51Z [verbose] Readiness Indicator file check\\\\n2026-01-01T08:27:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.281921 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc 
kubenswrapper[4867]: I0101 08:28:01.286164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.286206 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.286222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.286243 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.286258 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:01Z","lastTransitionTime":"2026-01-01T08:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.301881 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16c
c1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.316466 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.329904 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.339666 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"609f38d7-20bd-476b-9bde-3a8c8876b43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d769179e4504fd8f8dffde50dc5e4e944e232fa5a431ff933e28d41385fb7e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7af3c1c378c65ecb77fe867cfcfcb8e90466f1cddf579a1b7c386dc8eefc204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7af3c1c378c65ecb77fe867cfcfcb8e90466f1cddf579a1b7c386dc8eefc204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.352538 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.367791 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d
6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.378578 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.388988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.389034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.389049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.389068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.389082 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:01Z","lastTransitionTime":"2026-01-01T08:28:01Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.394595 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c
7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce27
1d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.408641 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.422941 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:01Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.492625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.492691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.492711 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.492736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.492753 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:01Z","lastTransitionTime":"2026-01-01T08:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.595725 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.595939 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.595975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.596006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.596029 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:01Z","lastTransitionTime":"2026-01-01T08:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.699379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.699429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.699446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.699466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.699483 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:01Z","lastTransitionTime":"2026-01-01T08:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.802937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.803014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.803035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.803063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.803082 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:01Z","lastTransitionTime":"2026-01-01T08:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.906456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.906521 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.906564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.906595 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:01 crc kubenswrapper[4867]: I0101 08:28:01.906734 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:01Z","lastTransitionTime":"2026-01-01T08:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.009743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.009786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.009797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.009813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.009824 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:02Z","lastTransitionTime":"2026-01-01T08:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.112648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.112713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.112729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.112753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.112771 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:02Z","lastTransitionTime":"2026-01-01T08:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.128296 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:02 crc kubenswrapper[4867]: E0101 08:28:02.128576 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.217062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.217114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.217137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.217160 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.217175 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:02Z","lastTransitionTime":"2026-01-01T08:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.319727 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.319799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.319860 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.319936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.319958 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:02Z","lastTransitionTime":"2026-01-01T08:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.422714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.423921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.423949 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.423975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.423993 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:02Z","lastTransitionTime":"2026-01-01T08:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.526229 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.526313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.526340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.526370 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.526395 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:02Z","lastTransitionTime":"2026-01-01T08:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.630119 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.630175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.630189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.630214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.630237 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:02Z","lastTransitionTime":"2026-01-01T08:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.733187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.733234 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.733245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.733347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.733400 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:02Z","lastTransitionTime":"2026-01-01T08:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.836387 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.836435 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.836455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.836480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.836497 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:02Z","lastTransitionTime":"2026-01-01T08:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.939946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.940081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.940156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.940189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:02 crc kubenswrapper[4867]: I0101 08:28:02.940255 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:02Z","lastTransitionTime":"2026-01-01T08:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.042839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.042934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.042953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.042976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.042993 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:03Z","lastTransitionTime":"2026-01-01T08:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.128318 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.128373 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.128422 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:03 crc kubenswrapper[4867]: E0101 08:28:03.128495 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:03 crc kubenswrapper[4867]: E0101 08:28:03.128549 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:03 crc kubenswrapper[4867]: E0101 08:28:03.128609 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.145194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.145341 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.145365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.145388 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.145440 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:03Z","lastTransitionTime":"2026-01-01T08:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.248010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.248064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.248080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.248102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.248121 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:03Z","lastTransitionTime":"2026-01-01T08:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.351968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.352022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.352038 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.352111 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.352135 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:03Z","lastTransitionTime":"2026-01-01T08:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.455258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.455339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.455373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.455449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.455475 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:03Z","lastTransitionTime":"2026-01-01T08:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.558805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.558941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.558975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.559004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.559027 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:03Z","lastTransitionTime":"2026-01-01T08:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.661589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.661645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.661664 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.661688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.661705 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:03Z","lastTransitionTime":"2026-01-01T08:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.764865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.764933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.764948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.764968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.764982 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:03Z","lastTransitionTime":"2026-01-01T08:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.868177 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.868225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.868242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.868264 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.868282 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:03Z","lastTransitionTime":"2026-01-01T08:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.971398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.971459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.971480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.971502 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:03 crc kubenswrapper[4867]: I0101 08:28:03.971522 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:03Z","lastTransitionTime":"2026-01-01T08:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.073520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.073566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.073581 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.073603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.073615 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.127534 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:04 crc kubenswrapper[4867]: E0101 08:28:04.127670 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.176282 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.176361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.176396 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.176478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.176504 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.280548 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.280601 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.280631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.280657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.280676 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.384000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.384059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.384084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.384110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.384127 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.487411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.487461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.487483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.487511 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.487532 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.590559 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.590621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.590647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.590674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.590694 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.643680 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.643832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.643867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.643931 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.643957 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: E0101 08:28:04.665538 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.669594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.669646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.669658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.669678 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.669694 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: E0101 08:28:04.686853 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.690551 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.690655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.690674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.690698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.690715 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: E0101 08:28:04.709341 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.713609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.713676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.713702 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.713733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.713759 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: E0101 08:28:04.731374 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.736430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.736486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.736504 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.736526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.736544 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: E0101 08:28:04.758681 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:04Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:04 crc kubenswrapper[4867]: E0101 08:28:04.758831 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.760922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.761018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.761039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.761069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.761088 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.864443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.864519 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.864539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.864566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.864589 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.967798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.967852 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.967867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.967916 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:04 crc kubenswrapper[4867]: I0101 08:28:04.967932 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:04Z","lastTransitionTime":"2026-01-01T08:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.071249 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.071321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.071342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.071371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.071394 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:05Z","lastTransitionTime":"2026-01-01T08:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.127774 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.127808 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.127942 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:05 crc kubenswrapper[4867]: E0101 08:28:05.128023 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:05 crc kubenswrapper[4867]: E0101 08:28:05.128054 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:05 crc kubenswrapper[4867]: E0101 08:28:05.128169 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.174878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.174993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.175046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.175072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.175090 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:05Z","lastTransitionTime":"2026-01-01T08:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.277844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.277935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.277954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.277982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.278002 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:05Z","lastTransitionTime":"2026-01-01T08:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.380517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.380592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.380610 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.380635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.380656 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:05Z","lastTransitionTime":"2026-01-01T08:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.484302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.484386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.484408 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.484442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.484468 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:05Z","lastTransitionTime":"2026-01-01T08:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.588070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.588151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.588179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.588214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.588238 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:05Z","lastTransitionTime":"2026-01-01T08:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.690796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.690854 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.690871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.690925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.690944 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:05Z","lastTransitionTime":"2026-01-01T08:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.793528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.793596 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.793613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.793638 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.793657 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:05Z","lastTransitionTime":"2026-01-01T08:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.896490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.896545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.896563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.896586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:05 crc kubenswrapper[4867]: I0101 08:28:05.896605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:05Z","lastTransitionTime":"2026-01-01T08:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.003992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.004058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.004076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.004114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.004133 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:06Z","lastTransitionTime":"2026-01-01T08:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.106622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.106680 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.106698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.106722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.106739 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:06Z","lastTransitionTime":"2026-01-01T08:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.128190 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:06 crc kubenswrapper[4867]: E0101 08:28:06.128382 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.209751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.209811 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.209837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.209917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.209949 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:06Z","lastTransitionTime":"2026-01-01T08:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.313081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.313130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.313146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.313166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.313186 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:06Z","lastTransitionTime":"2026-01-01T08:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.415600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.415673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.415700 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.415730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.415750 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:06Z","lastTransitionTime":"2026-01-01T08:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.519247 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.519298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.519315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.519340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.519357 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:06Z","lastTransitionTime":"2026-01-01T08:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.623036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.623139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.623165 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.623199 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.623225 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:06Z","lastTransitionTime":"2026-01-01T08:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.727313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.727414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.727474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.727500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.727557 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:06Z","lastTransitionTime":"2026-01-01T08:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.830569 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.831068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.831086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.831112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.831131 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:06Z","lastTransitionTime":"2026-01-01T08:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.934732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.934812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.934836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.934867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:06 crc kubenswrapper[4867]: I0101 08:28:06.934922 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:06Z","lastTransitionTime":"2026-01-01T08:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.038503 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.038555 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.038573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.038596 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.038612 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:07Z","lastTransitionTime":"2026-01-01T08:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.128013 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.128033 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:07 crc kubenswrapper[4867]: E0101 08:28:07.128387 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.128462 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:07 crc kubenswrapper[4867]: E0101 08:28:07.128613 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:07 crc kubenswrapper[4867]: E0101 08:28:07.128680 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.141007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.141056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.141074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.141095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.141112 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:07Z","lastTransitionTime":"2026-01-01T08:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.243850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.243966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.243990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.244016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.244033 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:07Z","lastTransitionTime":"2026-01-01T08:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.346919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.346978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.346996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.347022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.347040 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:07Z","lastTransitionTime":"2026-01-01T08:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.450978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.451039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.451057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.451083 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.451099 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:07Z","lastTransitionTime":"2026-01-01T08:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.554749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.554830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.554855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.554918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.554945 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:07Z","lastTransitionTime":"2026-01-01T08:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.657747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.657814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.657831 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.657857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.657875 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:07Z","lastTransitionTime":"2026-01-01T08:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.761140 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.761244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.761262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.761287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.761305 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:07Z","lastTransitionTime":"2026-01-01T08:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.864521 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.864613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.864650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.864683 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.864707 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:07Z","lastTransitionTime":"2026-01-01T08:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.968390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.968450 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.968469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.968492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:07 crc kubenswrapper[4867]: I0101 08:28:07.968509 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:07Z","lastTransitionTime":"2026-01-01T08:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.071586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.071646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.071663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.071686 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.071706 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:08Z","lastTransitionTime":"2026-01-01T08:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.127741 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:08 crc kubenswrapper[4867]: E0101 08:28:08.127966 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.174755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.174829 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.174850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.174875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.174934 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:08Z","lastTransitionTime":"2026-01-01T08:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.277819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.277920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.277940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.277965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.277984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:08Z","lastTransitionTime":"2026-01-01T08:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.343158 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:08 crc kubenswrapper[4867]: E0101 08:28:08.343348 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:28:08 crc kubenswrapper[4867]: E0101 08:28:08.343416 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs podName:28af0def-191f-4949-b617-a7a07dd8145b nodeName:}" failed. No retries permitted until 2026-01-01 08:29:12.343396819 +0000 UTC m=+161.478665588 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs") pod "network-metrics-daemon-kv8wr" (UID: "28af0def-191f-4949-b617-a7a07dd8145b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.381456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.381517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.381541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.381567 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.381585 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:08Z","lastTransitionTime":"2026-01-01T08:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.484951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.485025 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.485047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.485076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.485107 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:08Z","lastTransitionTime":"2026-01-01T08:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.587848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.587937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.587953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.587975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.587989 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:08Z","lastTransitionTime":"2026-01-01T08:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.691232 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.691318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.691349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.691380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.691403 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:08Z","lastTransitionTime":"2026-01-01T08:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.794867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.794952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.794970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.794993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.795013 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:08Z","lastTransitionTime":"2026-01-01T08:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.898723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.898803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.898820 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.898845 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:08 crc kubenswrapper[4867]: I0101 08:28:08.898864 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:08Z","lastTransitionTime":"2026-01-01T08:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.002278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.002405 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.002478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.002504 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.002521 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:09Z","lastTransitionTime":"2026-01-01T08:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.104964 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.105039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.105063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.105094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.105116 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:09Z","lastTransitionTime":"2026-01-01T08:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.127853 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.127944 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:09 crc kubenswrapper[4867]: E0101 08:28:09.128039 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.128057 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:09 crc kubenswrapper[4867]: E0101 08:28:09.128173 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:09 crc kubenswrapper[4867]: E0101 08:28:09.128298 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.208792 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.208935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.208955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.208988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.209008 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:09Z","lastTransitionTime":"2026-01-01T08:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.312340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.312401 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.312419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.312442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.312458 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:09Z","lastTransitionTime":"2026-01-01T08:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.415992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.416055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.416073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.416099 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.416121 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:09Z","lastTransitionTime":"2026-01-01T08:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.519381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.519440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.519457 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.519482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.519497 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:09Z","lastTransitionTime":"2026-01-01T08:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.622285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.622347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.622369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.622400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.622466 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:09Z","lastTransitionTime":"2026-01-01T08:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.725811 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.725870 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.725915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.725940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.725958 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:09Z","lastTransitionTime":"2026-01-01T08:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.829300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.829355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.829375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.829398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.829416 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:09Z","lastTransitionTime":"2026-01-01T08:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.932729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.932784 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.932804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.932829 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:09 crc kubenswrapper[4867]: I0101 08:28:09.932847 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:09Z","lastTransitionTime":"2026-01-01T08:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.035198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.035240 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.035251 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.035267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.035278 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:10Z","lastTransitionTime":"2026-01-01T08:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.127728 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:10 crc kubenswrapper[4867]: E0101 08:28:10.127983 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.129478 4867 scope.go:117] "RemoveContainer" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" Jan 01 08:28:10 crc kubenswrapper[4867]: E0101 08:28:10.129729 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.138743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.138809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.138828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.138856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.138875 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:10Z","lastTransitionTime":"2026-01-01T08:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.241996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.242035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.242054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.242076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.242094 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:10Z","lastTransitionTime":"2026-01-01T08:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.344992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.345076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.345101 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.345132 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.345155 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:10Z","lastTransitionTime":"2026-01-01T08:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.448351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.448424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.448447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.448475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.448497 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:10Z","lastTransitionTime":"2026-01-01T08:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.552427 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.552492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.552515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.552546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.552567 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:10Z","lastTransitionTime":"2026-01-01T08:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.656083 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.656197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.656219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.656243 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.656263 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:10Z","lastTransitionTime":"2026-01-01T08:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.759934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.759988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.760002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.760021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.760035 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:10Z","lastTransitionTime":"2026-01-01T08:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.863050 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.863106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.863126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.863172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.863200 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:10Z","lastTransitionTime":"2026-01-01T08:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.966560 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.966628 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.966647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.966672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:10 crc kubenswrapper[4867]: I0101 08:28:10.966689 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:10Z","lastTransitionTime":"2026-01-01T08:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.070054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.070127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.070151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.070184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.070205 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:11Z","lastTransitionTime":"2026-01-01T08:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.128521 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.128732 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.128778 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:11 crc kubenswrapper[4867]: E0101 08:28:11.128970 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:11 crc kubenswrapper[4867]: E0101 08:28:11.129100 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:11 crc kubenswrapper[4867]: E0101 08:28:11.129246 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.148799 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275cdf8433aa5cc7dc4bfa21e80bafb4b9960156aa9d0f7dd23b5c120dfee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.169678 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wkbs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da72a722-a2a3-459e-875a-e1605b442e05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:36Z\\\",\\\"message\\\":\\\"2026-01-01T08:26:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e\\\\n2026-01-01T08:26:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_72574e5b-652f-4310-b96e-b6484e9bfd6e to /host/opt/cni/bin/\\\\n2026-01-01T08:26:51Z [verbose] multus-daemon started\\\\n2026-01-01T08:26:51Z [verbose] Readiness Indicator file check\\\\n2026-01-01T08:27:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wjm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wkbs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.172818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.172868 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.172937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.172966 4867 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.172984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:11Z","lastTransitionTime":"2026-01-01T08:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.186380 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28af0def-191f-4949-b617-a7a07dd8145b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kv8wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc 
kubenswrapper[4867]: I0101 08:28:11.207637 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bb276ae-66de-4cb8-8237-1036b73042d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3a6b291f30c7815be13fde52bdef7ef22ee57e9c8be80809cf8a90029b8dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d92095d9119537b08f6c16f41499ea77d353bebdf97681d1078af6cf5d24be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67682952747bac1bee9a88d0d4960e1b723a69088fc0dfc6ad9a11d66be35066\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.226076 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97f622a4-8715-4813-9226-cd87ad7c38e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbef8e941e36563a6e489876fe03f03a6305edb8acbf6d31b1d098be4b23ebd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90768dce0ec952afe36a613a6e7ba00fe58331f820e40afff933da92ce33d762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9bcc967783fd9c73b9bdbb32623a3b3400a1489841b37b696810844be5c5686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9ea87c868a609b4ad7b0ac9fee0e9335bd9d5c640479e575ccdefc2777e74558\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.249835 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a6d985facd3be13c23b33ba5a969136c6d6a7c29dffd0edd13bde1d95078b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.266668 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"609f38d7-20bd-476b-9bde-3a8c8876b43e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d769179e4504fd8f8dffde50dc5e4e944e232fa5a431ff933e28d41385fb7e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7af3c1c378c65ecb77fe867cfcfcb8e90466f1cddf579a1b7c386dc8eefc204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7af3c1c378c65ecb77fe867cfcfcb8e90466f1cddf579a1b7c386dc8eefc204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.275708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.275771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.275788 4867 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.275810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.275828 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:11Z","lastTransitionTime":"2026-01-01T08:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.287076 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5acc6c55d865897b405288085474c3aca9bbb573bb48fcf91e2466bf7dd02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0682a1bb503d0ab0f11381800a009726a4876b2184ff20183119d500ab81c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.311203 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh66z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a93d40-ed12-413d-b8fa-1c683a35a7e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cbb4d600b45c1a7f207f05502281886ef2861173b5ca6aa86a73a0ab8c2afcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d54b0a00f70ec9ff5c3119e1ceff5f5a3384933ea2bc2712948b912396cdc37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://516a53067863b077cccfb4e563b5cbd9ae279c22ffdcbeebb47293897c6e1770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d52dfe9178d93f4da34f0bd3c4451cae3e25f934b269b6e718f5a6c8af6f54ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5562d6fe7e347312d61cc065beae5d152d35439e964bf84fae6aa734bf78694e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c54d6955579dc827dcfd5a44cc87392c127edb474df5d2eb7e5aa8f3a7032ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e32fa6d5e987c967f66122b3daeccc0a6e54d16910bee0600a13ed99fdc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nw7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh66z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.327652 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tg4nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"778253a2-b732-4460-994a-9543f533383f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dcfcdcf5aaf1d45a445b50f5ec520543620fc85992894681c627a2fd8ad4ada\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tg4nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.348083 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04fcb27f-6a52-491e-ad08-b0c273c9ff52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-01T08:26:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.368132 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.379055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.379105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.379122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.379148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.379169 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:11Z","lastTransitionTime":"2026-01-01T08:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.387740 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.406104 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4608a141-23bd-4286-8607-ad4b16b5ee11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee898ddead9a02fda6e950236b68e556221e707ee2a7c1a2d204194cc334124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e
55c13c69356d94b0355141df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z8tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-69jph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.439285 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-01T08:27:54Z\\\",\\\"message\\\":\\\"rom sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0101 08:27:54.133081 6952 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0101 08:27:54.133251 6952 
reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0101 08:27:54.133664 6952 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0101 08:27:54.133724 6952 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0101 08:27:54.133734 6952 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0101 08:27:54.133750 6952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0101 08:27:54.133756 6952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0101 08:27:54.133780 6952 factory.go:656] Stopping watch factory\\\\nI0101 08:27:54.133793 6952 ovnkube.go:599] Stopped ovnkube\\\\nI0101 08:27:54.133821 6952 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0101 08:27:54.133840 6952 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0101 08:27:54.133850 6952 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0101 08:27:54.133861 6952 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-01T08:27:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19dc409a94bc644412
3875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.454102 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db3b0fa-02f9-475b-a6ca-8ac262cbe337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:27:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40419cc0c7e84f74407395a89899e4b3107697ef63704b804b426d1cf7652d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f320736d0565d6bdbe13a7f7f6bc59048d54
de5b289822d570df98a517bb4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99n2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:27:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zs59x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.476637 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.481821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.481870 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.481879 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.481936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.481946 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:11Z","lastTransitionTime":"2026-01-01T08:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.493146 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bqtdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2fab997-d36e-43d8-9030-11a0a7a27e41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-01T08:26:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd4a18c82b3aff898e0efac667ef6035ee4c6fd0d8024211a2e6e7ddf35a93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-01T08:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hscwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-01T08:26:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bqtdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:11Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.585217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.585280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.585298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.585322 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.585339 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:11Z","lastTransitionTime":"2026-01-01T08:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.688059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.688123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.688142 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.688165 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.688182 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:11Z","lastTransitionTime":"2026-01-01T08:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.791097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.791179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.791201 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.791224 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.791243 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:11Z","lastTransitionTime":"2026-01-01T08:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.895079 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.895134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.895151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.895174 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.895192 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:11Z","lastTransitionTime":"2026-01-01T08:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.997862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.997948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.997965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.997990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:11 crc kubenswrapper[4867]: I0101 08:28:11.998008 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:11Z","lastTransitionTime":"2026-01-01T08:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.100283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.100336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.100354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.100376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.100394 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:12Z","lastTransitionTime":"2026-01-01T08:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.127675 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:12 crc kubenswrapper[4867]: E0101 08:28:12.128148 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.149986 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.203822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.203920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.203940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.203963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.203979 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:12Z","lastTransitionTime":"2026-01-01T08:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.306978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.307047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.307074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.307102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.307124 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:12Z","lastTransitionTime":"2026-01-01T08:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.410423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.410501 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.410523 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.410551 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.410572 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:12Z","lastTransitionTime":"2026-01-01T08:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.514073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.514149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.514172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.514203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.514226 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:12Z","lastTransitionTime":"2026-01-01T08:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.617876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.617995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.618014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.618037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.618055 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:12Z","lastTransitionTime":"2026-01-01T08:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.721972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.722028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.722044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.722067 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.722084 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:12Z","lastTransitionTime":"2026-01-01T08:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.825162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.825235 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.825257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.825285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.825308 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:12Z","lastTransitionTime":"2026-01-01T08:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.929411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.929492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.929530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.929559 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:12 crc kubenswrapper[4867]: I0101 08:28:12.929580 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:12Z","lastTransitionTime":"2026-01-01T08:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.032673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.032736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.032753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.032777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.032795 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:13Z","lastTransitionTime":"2026-01-01T08:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.127859 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.127936 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.128061 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:13 crc kubenswrapper[4867]: E0101 08:28:13.128289 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:13 crc kubenswrapper[4867]: E0101 08:28:13.128468 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:13 crc kubenswrapper[4867]: E0101 08:28:13.128602 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.135347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.135417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.135440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.135467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.135487 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:13Z","lastTransitionTime":"2026-01-01T08:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.240661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.240747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.240766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.240820 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.240840 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:13Z","lastTransitionTime":"2026-01-01T08:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.343643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.343739 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.343776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.343810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.343833 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:13Z","lastTransitionTime":"2026-01-01T08:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.447364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.447423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.447444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.447472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.447492 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:13Z","lastTransitionTime":"2026-01-01T08:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.549919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.549991 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.550013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.550044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.550065 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:13Z","lastTransitionTime":"2026-01-01T08:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.653281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.653382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.653405 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.653429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.653447 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:13Z","lastTransitionTime":"2026-01-01T08:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.755971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.756058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.756084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.756114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.756137 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:13Z","lastTransitionTime":"2026-01-01T08:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.858750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.858840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.858863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.858925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.858945 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:13Z","lastTransitionTime":"2026-01-01T08:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.962060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.962132 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.962151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.962176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:13 crc kubenswrapper[4867]: I0101 08:28:13.962201 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:13Z","lastTransitionTime":"2026-01-01T08:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.065699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.065760 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.065777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.065800 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.065818 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:14Z","lastTransitionTime":"2026-01-01T08:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.127970 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:14 crc kubenswrapper[4867]: E0101 08:28:14.128145 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.169063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.169132 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.169149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.169173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.169192 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:14Z","lastTransitionTime":"2026-01-01T08:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.272553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.272613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.272633 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.272656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.272672 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:14Z","lastTransitionTime":"2026-01-01T08:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.381964 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.382069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.382096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.382129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.382164 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:14Z","lastTransitionTime":"2026-01-01T08:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.485946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.486020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.486038 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.486062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.486080 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:14Z","lastTransitionTime":"2026-01-01T08:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.589165 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.589260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.589289 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.589322 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.589345 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:14Z","lastTransitionTime":"2026-01-01T08:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.693015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.693084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.693108 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.693138 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.693161 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:14Z","lastTransitionTime":"2026-01-01T08:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.796484 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.796546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.796562 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.796586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.796605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:14Z","lastTransitionTime":"2026-01-01T08:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.899736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.899790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.899808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.899829 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:14 crc kubenswrapper[4867]: I0101 08:28:14.899846 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:14Z","lastTransitionTime":"2026-01-01T08:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.002656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.002738 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.002763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.002797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.002822 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:15Z","lastTransitionTime":"2026-01-01T08:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.106102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.106216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.106239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.106268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.106290 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:15Z","lastTransitionTime":"2026-01-01T08:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.127630 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.128026 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:15 crc kubenswrapper[4867]: E0101 08:28:15.128195 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.128218 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:15 crc kubenswrapper[4867]: E0101 08:28:15.128241 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:15 crc kubenswrapper[4867]: E0101 08:28:15.128300 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.152647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.152712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.152729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.152755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.152774 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:15Z","lastTransitionTime":"2026-01-01T08:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:15 crc kubenswrapper[4867]: E0101 08:28:15.173380 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:15Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.178903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.178945 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.178957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.178973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.178987 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:15Z","lastTransitionTime":"2026-01-01T08:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:15 crc kubenswrapper[4867]: E0101 08:28:15.195703 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:15Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.200633 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.200694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.200716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.200739 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.200760 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:15Z","lastTransitionTime":"2026-01-01T08:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 01 08:28:15 crc kubenswrapper[4867]: E0101 08:28:15.223167 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-01T08:28:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"206ef261-50f6-4f09-a8e0-3b8f2babe599\\\",\\\"systemUUID\\\":\\\"e821d981-d45f-45c6-abaa-62a41c48c1e4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-01T08:28:15Z is after 2025-08-24T17:21:41Z" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.228133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.228205 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.228223 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.228253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.228273 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-01T08:28:15Z","lastTransitionTime":"2026-01-01T08:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.299858 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm"] Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.300552 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.305629 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.306054 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.306243 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.306740 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.326725 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bqtdc" podStartSLOduration=86.326632274 podStartE2EDuration="1m26.326632274s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:15.326207352 +0000 UTC m=+104.461476211" watchObservedRunningTime="2026-01-01 08:28:15.326632274 +0000 UTC m=+104.461901093" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.386762 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podStartSLOduration=86.386739755 podStartE2EDuration="1m26.386739755s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:15.34245961 +0000 UTC m=+104.477728469" watchObservedRunningTime="2026-01-01 08:28:15.386739755 +0000 UTC 
m=+104.522008534" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.428361 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.428510 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.428580 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.428667 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.428726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.435305 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zs59x" podStartSLOduration=85.43528602 podStartE2EDuration="1m25.43528602s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:15.410231145 +0000 UTC m=+104.545499934" watchObservedRunningTime="2026-01-01 08:28:15.43528602 +0000 UTC m=+104.570554789" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.453224 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.453209944 podStartE2EDuration="3.453209944s" podCreationTimestamp="2026-01-01 08:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:15.436103813 +0000 UTC m=+104.571372642" watchObservedRunningTime="2026-01-01 08:28:15.453209944 +0000 UTC m=+104.588478713" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.517259 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-wkbs8" podStartSLOduration=86.517238795 podStartE2EDuration="1m26.517238795s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:15.50283676 +0000 UTC m=+104.638105529" watchObservedRunningTime="2026-01-01 08:28:15.517238795 +0000 UTC m=+104.652507564" Jan 01 
08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.529327 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.529384 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.529416 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.529438 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.529505 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.529455 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.529713 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.530633 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.535926 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.549612 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-blswm\" (UID: \"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.550333 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.550306915 podStartE2EDuration="58.550306915s" podCreationTimestamp="2026-01-01 08:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:15.550020567 +0000 UTC m=+104.685289336" watchObservedRunningTime="2026-01-01 08:28:15.550306915 +0000 UTC m=+104.685575724" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.551309 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.551300173 podStartE2EDuration="1m20.551300173s" podCreationTimestamp="2026-01-01 08:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:15.535528429 +0000 UTC m=+104.670797288" watchObservedRunningTime="2026-01-01 08:28:15.551300173 +0000 UTC m=+104.686568982" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.562939 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.562916769 podStartE2EDuration="15.562916769s" podCreationTimestamp="2026-01-01 08:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:15.561776087 +0000 UTC m=+104.697044896" 
watchObservedRunningTime="2026-01-01 08:28:15.562916769 +0000 UTC m=+104.698185568" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.606647 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jh66z" podStartSLOduration=86.606625019 podStartE2EDuration="1m26.606625019s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:15.605018893 +0000 UTC m=+104.740287682" watchObservedRunningTime="2026-01-01 08:28:15.606625019 +0000 UTC m=+104.741893818" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.620100 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.646116 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tg4nj" podStartSLOduration=86.646093159 podStartE2EDuration="1m26.646093159s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:15.620000425 +0000 UTC m=+104.755269204" watchObservedRunningTime="2026-01-01 08:28:15.646093159 +0000 UTC m=+104.781361978" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.662133 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.662112709 podStartE2EDuration="1m27.662112709s" podCreationTimestamp="2026-01-01 08:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:15.661478421 +0000 UTC m=+104.796747250" watchObservedRunningTime="2026-01-01 
08:28:15.662112709 +0000 UTC m=+104.797381488" Jan 01 08:28:15 crc kubenswrapper[4867]: I0101 08:28:15.695908 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" event={"ID":"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5","Type":"ContainerStarted","Data":"20498f1ac3faf61b02af4fb32f597789a964bc85a80b354d537437ae79953cce"} Jan 01 08:28:16 crc kubenswrapper[4867]: I0101 08:28:16.127818 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:16 crc kubenswrapper[4867]: E0101 08:28:16.128273 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:16 crc kubenswrapper[4867]: I0101 08:28:16.709190 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" event={"ID":"1dbf6885-9c6c-4cac-8e33-3ccb4250d8d5","Type":"ContainerStarted","Data":"7aa8d731e9c0ba4e3e65ad529c759c627a914cdedbd029113e2e1d55860df30d"} Jan 01 08:28:16 crc kubenswrapper[4867]: I0101 08:28:16.737507 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-blswm" podStartSLOduration=87.737478481 podStartE2EDuration="1m27.737478481s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:16.736267067 +0000 UTC m=+105.871535936" watchObservedRunningTime="2026-01-01 08:28:16.737478481 +0000 UTC m=+105.872747280" Jan 01 
08:28:17 crc kubenswrapper[4867]: I0101 08:28:17.127951 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:17 crc kubenswrapper[4867]: I0101 08:28:17.127951 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:17 crc kubenswrapper[4867]: I0101 08:28:17.128117 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:17 crc kubenswrapper[4867]: E0101 08:28:17.128797 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:17 crc kubenswrapper[4867]: E0101 08:28:17.129066 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:17 crc kubenswrapper[4867]: E0101 08:28:17.129263 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:18 crc kubenswrapper[4867]: I0101 08:28:18.128463 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:18 crc kubenswrapper[4867]: E0101 08:28:18.128679 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:19 crc kubenswrapper[4867]: I0101 08:28:19.127878 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:19 crc kubenswrapper[4867]: E0101 08:28:19.128061 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:19 crc kubenswrapper[4867]: I0101 08:28:19.128097 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:19 crc kubenswrapper[4867]: E0101 08:28:19.128388 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:19 crc kubenswrapper[4867]: I0101 08:28:19.128394 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:19 crc kubenswrapper[4867]: E0101 08:28:19.128503 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:20 crc kubenswrapper[4867]: I0101 08:28:20.128056 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:20 crc kubenswrapper[4867]: E0101 08:28:20.128450 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:21 crc kubenswrapper[4867]: I0101 08:28:21.128448 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:21 crc kubenswrapper[4867]: I0101 08:28:21.128457 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:21 crc kubenswrapper[4867]: E0101 08:28:21.130586 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:21 crc kubenswrapper[4867]: I0101 08:28:21.130645 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:21 crc kubenswrapper[4867]: E0101 08:28:21.130838 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:21 crc kubenswrapper[4867]: E0101 08:28:21.130949 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:22 crc kubenswrapper[4867]: I0101 08:28:22.128458 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:22 crc kubenswrapper[4867]: E0101 08:28:22.129213 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:22 crc kubenswrapper[4867]: I0101 08:28:22.733995 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wkbs8_da72a722-a2a3-459e-875a-e1605b442e05/kube-multus/1.log" Jan 01 08:28:22 crc kubenswrapper[4867]: I0101 08:28:22.734707 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wkbs8_da72a722-a2a3-459e-875a-e1605b442e05/kube-multus/0.log" Jan 01 08:28:22 crc kubenswrapper[4867]: I0101 08:28:22.734997 4867 generic.go:334] "Generic (PLEG): container finished" podID="da72a722-a2a3-459e-875a-e1605b442e05" containerID="602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12" exitCode=1 Jan 01 08:28:22 crc kubenswrapper[4867]: I0101 08:28:22.735128 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wkbs8" event={"ID":"da72a722-a2a3-459e-875a-e1605b442e05","Type":"ContainerDied","Data":"602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12"} Jan 01 08:28:22 crc kubenswrapper[4867]: I0101 08:28:22.735366 4867 scope.go:117] "RemoveContainer" containerID="d3df8c5c8fba3b60bb4e0512654c73032909c8a9deb28c4664dd08ec8231de3f" Jan 01 08:28:22 crc kubenswrapper[4867]: I0101 08:28:22.736252 4867 scope.go:117] "RemoveContainer" containerID="602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12" Jan 01 08:28:22 crc kubenswrapper[4867]: E0101 08:28:22.736753 4867 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-wkbs8_openshift-multus(da72a722-a2a3-459e-875a-e1605b442e05)\"" pod="openshift-multus/multus-wkbs8" podUID="da72a722-a2a3-459e-875a-e1605b442e05" Jan 01 08:28:23 crc kubenswrapper[4867]: I0101 08:28:23.127697 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:23 crc kubenswrapper[4867]: I0101 08:28:23.127831 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:23 crc kubenswrapper[4867]: E0101 08:28:23.128030 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:23 crc kubenswrapper[4867]: I0101 08:28:23.128081 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:23 crc kubenswrapper[4867]: E0101 08:28:23.128561 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:23 crc kubenswrapper[4867]: E0101 08:28:23.128634 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:23 crc kubenswrapper[4867]: I0101 08:28:23.740874 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wkbs8_da72a722-a2a3-459e-875a-e1605b442e05/kube-multus/1.log" Jan 01 08:28:24 crc kubenswrapper[4867]: I0101 08:28:24.127839 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:24 crc kubenswrapper[4867]: E0101 08:28:24.129353 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:24 crc kubenswrapper[4867]: I0101 08:28:24.129670 4867 scope.go:117] "RemoveContainer" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" Jan 01 08:28:24 crc kubenswrapper[4867]: E0101 08:28:24.129966 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nftn_openshift-ovn-kubernetes(2d26a65b-86d6-4603-bdeb-ffcb2f086fda)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" Jan 01 08:28:25 crc kubenswrapper[4867]: I0101 08:28:25.128433 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:25 crc kubenswrapper[4867]: I0101 08:28:25.128569 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:25 crc kubenswrapper[4867]: E0101 08:28:25.129288 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:25 crc kubenswrapper[4867]: I0101 08:28:25.128570 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:25 crc kubenswrapper[4867]: E0101 08:28:25.129392 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:25 crc kubenswrapper[4867]: E0101 08:28:25.129553 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:26 crc kubenswrapper[4867]: I0101 08:28:26.128466 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:26 crc kubenswrapper[4867]: E0101 08:28:26.128645 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:27 crc kubenswrapper[4867]: I0101 08:28:27.128393 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:27 crc kubenswrapper[4867]: I0101 08:28:27.128509 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:27 crc kubenswrapper[4867]: I0101 08:28:27.128518 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:27 crc kubenswrapper[4867]: E0101 08:28:27.129322 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:27 crc kubenswrapper[4867]: E0101 08:28:27.130140 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:27 crc kubenswrapper[4867]: E0101 08:28:27.130256 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:28 crc kubenswrapper[4867]: I0101 08:28:28.128293 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:28 crc kubenswrapper[4867]: E0101 08:28:28.128446 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:29 crc kubenswrapper[4867]: I0101 08:28:29.127975 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:29 crc kubenswrapper[4867]: I0101 08:28:29.128041 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:29 crc kubenswrapper[4867]: I0101 08:28:29.128069 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:29 crc kubenswrapper[4867]: E0101 08:28:29.128174 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:29 crc kubenswrapper[4867]: E0101 08:28:29.128306 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:29 crc kubenswrapper[4867]: E0101 08:28:29.128476 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:30 crc kubenswrapper[4867]: I0101 08:28:30.128244 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:30 crc kubenswrapper[4867]: E0101 08:28:30.128442 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:31 crc kubenswrapper[4867]: E0101 08:28:31.076995 4867 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 01 08:28:31 crc kubenswrapper[4867]: I0101 08:28:31.128376 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:31 crc kubenswrapper[4867]: I0101 08:28:31.128531 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:31 crc kubenswrapper[4867]: I0101 08:28:31.128647 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:31 crc kubenswrapper[4867]: E0101 08:28:31.130286 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:31 crc kubenswrapper[4867]: E0101 08:28:31.130437 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:31 crc kubenswrapper[4867]: E0101 08:28:31.130627 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:31 crc kubenswrapper[4867]: E0101 08:28:31.224265 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 01 08:28:32 crc kubenswrapper[4867]: I0101 08:28:32.128470 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:32 crc kubenswrapper[4867]: E0101 08:28:32.128699 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:33 crc kubenswrapper[4867]: I0101 08:28:33.127864 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:33 crc kubenswrapper[4867]: I0101 08:28:33.128122 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:33 crc kubenswrapper[4867]: I0101 08:28:33.128342 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:33 crc kubenswrapper[4867]: E0101 08:28:33.128335 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:33 crc kubenswrapper[4867]: E0101 08:28:33.128568 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:33 crc kubenswrapper[4867]: E0101 08:28:33.128759 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:34 crc kubenswrapper[4867]: I0101 08:28:34.127648 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:34 crc kubenswrapper[4867]: E0101 08:28:34.127848 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:35 crc kubenswrapper[4867]: I0101 08:28:35.128063 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:35 crc kubenswrapper[4867]: I0101 08:28:35.128099 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:35 crc kubenswrapper[4867]: E0101 08:28:35.128476 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:35 crc kubenswrapper[4867]: E0101 08:28:35.128623 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:35 crc kubenswrapper[4867]: I0101 08:28:35.128426 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:35 crc kubenswrapper[4867]: E0101 08:28:35.129721 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:35 crc kubenswrapper[4867]: I0101 08:28:35.130166 4867 scope.go:117] "RemoveContainer" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" Jan 01 08:28:35 crc kubenswrapper[4867]: I0101 08:28:35.789801 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/3.log" Jan 01 08:28:35 crc kubenswrapper[4867]: I0101 08:28:35.793381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerStarted","Data":"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7"} Jan 01 08:28:35 crc kubenswrapper[4867]: I0101 08:28:35.793761 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:28:35 crc kubenswrapper[4867]: I0101 08:28:35.828032 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podStartSLOduration=106.828012527 podStartE2EDuration="1m46.828012527s" 
podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:35.826629878 +0000 UTC m=+124.961898677" watchObservedRunningTime="2026-01-01 08:28:35.828012527 +0000 UTC m=+124.963281306" Jan 01 08:28:36 crc kubenswrapper[4867]: I0101 08:28:36.128482 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:36 crc kubenswrapper[4867]: E0101 08:28:36.128667 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:36 crc kubenswrapper[4867]: I0101 08:28:36.129402 4867 scope.go:117] "RemoveContainer" containerID="602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12" Jan 01 08:28:36 crc kubenswrapper[4867]: I0101 08:28:36.142433 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kv8wr"] Jan 01 08:28:36 crc kubenswrapper[4867]: E0101 08:28:36.225911 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 01 08:28:36 crc kubenswrapper[4867]: I0101 08:28:36.799876 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wkbs8_da72a722-a2a3-459e-875a-e1605b442e05/kube-multus/1.log" Jan 01 08:28:36 crc kubenswrapper[4867]: I0101 08:28:36.800633 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:36 crc kubenswrapper[4867]: I0101 08:28:36.800592 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wkbs8" event={"ID":"da72a722-a2a3-459e-875a-e1605b442e05","Type":"ContainerStarted","Data":"6f96374fd054c235b06dbd37e3fde553db1ef9928046058431a727ac1da2bf50"} Jan 01 08:28:36 crc kubenswrapper[4867]: E0101 08:28:36.800784 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:37 crc kubenswrapper[4867]: I0101 08:28:37.128266 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:37 crc kubenswrapper[4867]: I0101 08:28:37.128277 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:37 crc kubenswrapper[4867]: E0101 08:28:37.128500 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:37 crc kubenswrapper[4867]: I0101 08:28:37.128605 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:37 crc kubenswrapper[4867]: E0101 08:28:37.128780 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:37 crc kubenswrapper[4867]: E0101 08:28:37.128931 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:39 crc kubenswrapper[4867]: I0101 08:28:39.127626 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:39 crc kubenswrapper[4867]: I0101 08:28:39.127714 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:39 crc kubenswrapper[4867]: I0101 08:28:39.127649 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:39 crc kubenswrapper[4867]: I0101 08:28:39.127775 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:39 crc kubenswrapper[4867]: E0101 08:28:39.128067 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:39 crc kubenswrapper[4867]: E0101 08:28:39.128265 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:39 crc kubenswrapper[4867]: E0101 08:28:39.128423 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:39 crc kubenswrapper[4867]: E0101 08:28:39.128577 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:41 crc kubenswrapper[4867]: I0101 08:28:41.128217 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:41 crc kubenswrapper[4867]: I0101 08:28:41.128303 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:41 crc kubenswrapper[4867]: I0101 08:28:41.128339 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:41 crc kubenswrapper[4867]: I0101 08:28:41.128375 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:41 crc kubenswrapper[4867]: E0101 08:28:41.130225 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 01 08:28:41 crc kubenswrapper[4867]: E0101 08:28:41.130481 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kv8wr" podUID="28af0def-191f-4949-b617-a7a07dd8145b" Jan 01 08:28:41 crc kubenswrapper[4867]: E0101 08:28:41.130613 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 01 08:28:41 crc kubenswrapper[4867]: E0101 08:28:41.130697 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 01 08:28:43 crc kubenswrapper[4867]: I0101 08:28:43.128430 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:28:43 crc kubenswrapper[4867]: I0101 08:28:43.128526 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:28:43 crc kubenswrapper[4867]: I0101 08:28:43.128558 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 01 08:28:43 crc kubenswrapper[4867]: I0101 08:28:43.128663 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 01 08:28:43 crc kubenswrapper[4867]: I0101 08:28:43.131329 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 01 08:28:43 crc kubenswrapper[4867]: I0101 08:28:43.131947 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 01 08:28:43 crc kubenswrapper[4867]: I0101 08:28:43.132153 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 01 08:28:43 crc kubenswrapper[4867]: I0101 08:28:43.132303 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 01 08:28:43 crc kubenswrapper[4867]: I0101 08:28:43.132348 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 01 08:28:43 crc kubenswrapper[4867]: I0101 08:28:43.132448 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.746855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.802371 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bsfrw"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.803242 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.805260 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jjglf"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.806058 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.805953 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.807305 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.807402 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.807438 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.808142 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.808192 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.808792 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hmhl9"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.809625 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.809769 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.812997 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.813475 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.813830 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.813997 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.817184 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.817867 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.818198 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.819186 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.821172 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.821201 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.821365 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wn4kc"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.822508 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.822751 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.825102 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.825686 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.826066 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.826286 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.826451 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.826533 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.826753 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.827229 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.827990 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.831357 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.831539 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.831683 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.831831 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.832011 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.832365 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.832690 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.833001 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.833526 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.835916 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.836288 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.836530 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.836777 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.837731 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.837810 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.838032 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.838203 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.857793 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.861142 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ks4bk"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 
08:28:45.861395 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.861421 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.861695 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.861793 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.861959 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.862015 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.873454 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nxcwg"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875055 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1504048c-578a-42d2-a6de-9161ee1ebb82-audit-dir\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875103 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmfjs\" (UniqueName: \"kubernetes.io/projected/f92ddf87-e976-4f3c-9a8c-8a4dab665391-kube-api-access-vmfjs\") pod 
\"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875134 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f92ddf87-e976-4f3c-9a8c-8a4dab665391-config\") pod \"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875168 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875194 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875218 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc 
kubenswrapper[4867]: I0101 08:28:45.875243 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875275 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875300 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875323 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1504048c-578a-42d2-a6de-9161ee1ebb82-encryption-config\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875349 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khjct\" (UniqueName: 
\"kubernetes.io/projected/94b7004c-c318-4872-a1b7-f983c691a523-kube-api-access-khjct\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875374 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02e8282b-37b6-4539-ad59-fae4c4c65a45-serving-cert\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875396 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-audit\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875418 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f92ddf87-e976-4f3c-9a8c-8a4dab665391-machine-approver-tls\") pod \"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875466 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-client-ca\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: 
I0101 08:28:45.875501 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-config\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875525 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875548 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1504048c-578a-42d2-a6de-9161ee1ebb82-etcd-client\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875570 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-etcd-client\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875595 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875621 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-audit-policies\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875646 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b7004c-c318-4872-a1b7-f983c691a523-config\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875702 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-config\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875758 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4ad652ed-d682-41ed-86e7-d49eaa1b6f0b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ngh5s\" (UID: \"4ad652ed-d682-41ed-86e7-d49eaa1b6f0b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875790 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1504048c-578a-42d2-a6de-9161ee1ebb82-serving-cert\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875814 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875845 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875866 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-encryption-config\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875952 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/94b7004c-c318-4872-a1b7-f983c691a523-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.875981 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aff785a-03ef-4b1a-93d6-e2674725b053-serving-cert\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876040 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1504048c-578a-42d2-a6de-9161ee1ebb82-node-pullsecrets\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876220 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876268 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hhnd\" (UniqueName: \"kubernetes.io/projected/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-kube-api-access-2hhnd\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876300 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94b7004c-c318-4872-a1b7-f983c691a523-images\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876341 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876384 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttrk2\" (UniqueName: \"kubernetes.io/projected/4ad652ed-d682-41ed-86e7-d49eaa1b6f0b-kube-api-access-ttrk2\") pod \"openshift-apiserver-operator-796bbdcf4f-ngh5s\" (UID: 
\"4ad652ed-d682-41ed-86e7-d49eaa1b6f0b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876425 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-serving-cert\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876463 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-client-ca\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876467 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876574 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmr9w\" (UniqueName: \"kubernetes.io/projected/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-kube-api-access-gmr9w\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876608 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwd6x\" (UniqueName: \"kubernetes.io/projected/02e8282b-37b6-4539-ad59-fae4c4c65a45-kube-api-access-dwd6x\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876628 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-etcd-serving-ca\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876672 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f92ddf87-e976-4f3c-9a8c-8a4dab665391-auth-proxy-config\") pod \"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876709 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-image-import-ca\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876733 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad652ed-d682-41ed-86e7-d49eaa1b6f0b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ngh5s\" (UID: \"4ad652ed-d682-41ed-86e7-d49eaa1b6f0b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876755 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-audit-dir\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876808 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-config\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876832 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dfqd\" (UniqueName: \"kubernetes.io/projected/9aff785a-03ef-4b1a-93d6-e2674725b053-kube-api-access-6dfqd\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876868 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjw4k\" (UniqueName: \"kubernetes.io/projected/1504048c-578a-42d2-a6de-9161ee1ebb82-kube-api-access-bjw4k\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.876924 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-dir\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.877178 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-policies\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.877327 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.896363 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg"] Jan 01 08:28:45 crc 
kubenswrapper[4867]: I0101 08:28:45.897175 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jjglf"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.897225 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hmhl9"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.897244 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bsfrw"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.897393 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.898149 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.898205 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.898362 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.898802 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.899001 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fxjs9"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.899297 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.899331 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.899356 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.899412 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.899486 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.899497 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.899513 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.899548 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.899996 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fxjs9" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.901262 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.902579 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.905282 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.907087 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.908066 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.910115 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.910229 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.910354 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.910450 4867 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.910590 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.910641 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.910742 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.910809 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.910913 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.911035 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.911053 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.911138 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.911221 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.911240 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.911334 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.911389 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.911339 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.913066 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6lsq2"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.913659 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.913772 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.914272 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.915246 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.917472 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nb85"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.917946 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.919680 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.919736 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.919847 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.919993 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.920094 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.920208 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.920258 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t67cg"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.920402 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.920540 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.920662 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.920758 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.920809 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.920851 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.921695 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.921986 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.940782 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.941170 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.942257 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.942360 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.943200 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 
08:28:45.943241 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.943667 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.943970 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.944021 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.944068 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.944291 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.944412 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.945603 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.946710 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.948711 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.949973 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 
08:28:45.951498 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.952507 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.953630 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.960868 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.962565 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b2crx"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.963177 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.965197 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.967909 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.968163 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.971314 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.971791 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.977427 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.981677 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5gnbj"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.982212 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.982235 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2"] Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.982430 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.982601 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.982749 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.990303 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992541 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/94b7004c-c318-4872-a1b7-f983c691a523-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992578 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18d7838a-e01c-42e1-ab20-a72878132ef1-serving-cert\") pod \"openshift-config-operator-7777fb866f-lnc8j\" (UID: \"18d7838a-e01c-42e1-ab20-a72878132ef1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992598 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aff785a-03ef-4b1a-93d6-e2674725b053-serving-cert\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992615 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1504048c-578a-42d2-a6de-9161ee1ebb82-node-pullsecrets\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992631 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hhnd\" (UniqueName: \"kubernetes.io/projected/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-kube-api-access-2hhnd\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992647 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94b7004c-c318-4872-a1b7-f983c691a523-images\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992662 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992682 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms9f6\" (UniqueName: 
\"kubernetes.io/projected/69dbb713-1149-4edd-899c-3fb77a8a36e2-kube-api-access-ms9f6\") pod \"multus-admission-controller-857f4d67dd-b2crx\" (UID: \"69dbb713-1149-4edd-899c-3fb77a8a36e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992701 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-oauth-config\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992718 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-serving-cert\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992748 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-oauth-serving-cert\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 
08:28:45.992765 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttrk2\" (UniqueName: \"kubernetes.io/projected/4ad652ed-d682-41ed-86e7-d49eaa1b6f0b-kube-api-access-ttrk2\") pod \"openshift-apiserver-operator-796bbdcf4f-ngh5s\" (UID: \"4ad652ed-d682-41ed-86e7-d49eaa1b6f0b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992782 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-client-ca\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992803 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmr9w\" (UniqueName: \"kubernetes.io/projected/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-kube-api-access-gmr9w\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992821 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82xsq\" (UniqueName: \"kubernetes.io/projected/18d7838a-e01c-42e1-ab20-a72878132ef1-kube-api-access-82xsq\") pod \"openshift-config-operator-7777fb866f-lnc8j\" (UID: \"18d7838a-e01c-42e1-ab20-a72878132ef1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992838 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42da4c7a-d738-4262-a395-1ff1c9d4f399-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: \"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992853 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db27j\" (UniqueName: \"kubernetes.io/projected/42da4c7a-d738-4262-a395-1ff1c9d4f399-kube-api-access-db27j\") pod \"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: \"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992870 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwd6x\" (UniqueName: \"kubernetes.io/projected/02e8282b-37b6-4539-ad59-fae4c4c65a45-kube-api-access-dwd6x\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992899 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-etcd-serving-ca\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992916 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f92ddf87-e976-4f3c-9a8c-8a4dab665391-auth-proxy-config\") pod \"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992931 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/42da4c7a-d738-4262-a395-1ff1c9d4f399-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: \"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992948 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-service-ca\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-image-import-ca\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.992987 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c3b9b3-d6b0-46cb-aed9-c58555214163-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fbt8p\" (UID: \"87c3b9b3-d6b0-46cb-aed9-c58555214163\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993002 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad652ed-d682-41ed-86e7-d49eaa1b6f0b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ngh5s\" (UID: 
\"4ad652ed-d682-41ed-86e7-d49eaa1b6f0b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993036 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-audit-dir\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993055 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68rp2\" (UniqueName: \"kubernetes.io/projected/87c3b9b3-d6b0-46cb-aed9-c58555214163-kube-api-access-68rp2\") pod \"openshift-controller-manager-operator-756b6f6bc6-fbt8p\" (UID: \"87c3b9b3-d6b0-46cb-aed9-c58555214163\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993077 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-config\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993092 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dfqd\" (UniqueName: \"kubernetes.io/projected/9aff785a-03ef-4b1a-93d6-e2674725b053-kube-api-access-6dfqd\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993107 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bjw4k\" (UniqueName: \"kubernetes.io/projected/1504048c-578a-42d2-a6de-9161ee1ebb82-kube-api-access-bjw4k\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993124 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wqxg\" (UniqueName: \"kubernetes.io/projected/01fa587e-a8a9-4092-9462-905cf90cf1dc-kube-api-access-9wqxg\") pod \"cluster-samples-operator-665b6dd947-sqxbg\" (UID: \"01fa587e-a8a9-4092-9462-905cf90cf1dc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993137 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-config\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993152 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/851af35f-1738-49d8-855c-33e09731c8e3-metrics-tls\") pod \"dns-operator-744455d44c-t67cg\" (UID: \"851af35f-1738-49d8-855c-33e09731c8e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993168 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-dir\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993182 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c3b9b3-d6b0-46cb-aed9-c58555214163-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fbt8p\" (UID: \"87c3b9b3-d6b0-46cb-aed9-c58555214163\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993197 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-policies\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993215 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993231 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1504048c-578a-42d2-a6de-9161ee1ebb82-audit-dir\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993246 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmfjs\" (UniqueName: 
\"kubernetes.io/projected/f92ddf87-e976-4f3c-9a8c-8a4dab665391-kube-api-access-vmfjs\") pod \"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993261 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrpfd\" (UniqueName: \"kubernetes.io/projected/25d57f2f-1353-417b-ba47-a0ceb1a4577e-kube-api-access-vrpfd\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993277 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7f5w\" (UniqueName: \"kubernetes.io/projected/851af35f-1738-49d8-855c-33e09731c8e3-kube-api-access-j7f5w\") pod \"dns-operator-744455d44c-t67cg\" (UID: \"851af35f-1738-49d8-855c-33e09731c8e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993300 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f92ddf87-e976-4f3c-9a8c-8a4dab665391-config\") pod \"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:45 crc 
kubenswrapper[4867]: I0101 08:28:45.993332 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993357 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-serving-cert\") pod \"console-operator-58897d9998-ks4bk\" (UID: \"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993372 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/18d7838a-e01c-42e1-ab20-a72878132ef1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lnc8j\" (UID: \"18d7838a-e01c-42e1-ab20-a72878132ef1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993390 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993405 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993420 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993435 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1504048c-578a-42d2-a6de-9161ee1ebb82-encryption-config\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993466 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khjct\" (UniqueName: \"kubernetes.io/projected/94b7004c-c318-4872-a1b7-f983c691a523-kube-api-access-khjct\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 
08:28:45.993480 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/01fa587e-a8a9-4092-9462-905cf90cf1dc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-sqxbg\" (UID: \"01fa587e-a8a9-4092-9462-905cf90cf1dc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02e8282b-37b6-4539-ad59-fae4c4c65a45-serving-cert\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993510 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-audit\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993525 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f92ddf87-e976-4f3c-9a8c-8a4dab665391-machine-approver-tls\") pod \"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993541 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-trusted-ca-bundle\") pod \"console-f9d7485db-6lsq2\" (UID: 
\"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993556 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-config\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993570 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-client-ca\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993585 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-config\") pod \"console-operator-58897d9998-ks4bk\" (UID: \"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993606 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993621 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1504048c-578a-42d2-a6de-9161ee1ebb82-etcd-client\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993636 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-etcd-client\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993652 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz96w\" (UniqueName: \"kubernetes.io/projected/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-kube-api-access-bz96w\") pod \"console-operator-58897d9998-ks4bk\" (UID: \"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993667 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69dbb713-1149-4edd-899c-3fb77a8a36e2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b2crx\" (UID: \"69dbb713-1149-4edd-899c-3fb77a8a36e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993682 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-audit-policies\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993697 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993713 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b7004c-c318-4872-a1b7-f983c691a523-config\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993728 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-config\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993757 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-serving-cert\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:45 crc 
kubenswrapper[4867]: I0101 08:28:45.993773 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42da4c7a-d738-4262-a395-1ff1c9d4f399-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: \"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993788 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ad652ed-d682-41ed-86e7-d49eaa1b6f0b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ngh5s\" (UID: \"4ad652ed-d682-41ed-86e7-d49eaa1b6f0b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993802 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1504048c-578a-42d2-a6de-9161ee1ebb82-serving-cert\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993817 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993832 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-trusted-ca\") pod \"console-operator-58897d9998-ks4bk\" (UID: 
\"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993855 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993877 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-encryption-config\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.993909 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.995737 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-etcd-serving-ca\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.995926 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/f92ddf87-e976-4f3c-9a8c-8a4dab665391-auth-proxy-config\") pod \"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.996737 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-image-import-ca\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.997055 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-audit-dir\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.997261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-dir\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.997867 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-config\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.997964 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-client-ca\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.998003 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.998257 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.998376 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-config\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.998441 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1504048c-578a-42d2-a6de-9161ee1ebb82-audit-dir\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.998470 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-audit-policies\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.998717 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-audit\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.998721 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.999276 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f92ddf87-e976-4f3c-9a8c-8a4dab665391-config\") pod \"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:45 crc kubenswrapper[4867]: I0101 08:28:45.999713 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b7004c-c318-4872-a1b7-f983c691a523-config\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.000011 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.000621 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-client-ca\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.003589 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.003857 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1504048c-578a-42d2-a6de-9161ee1ebb82-node-pullsecrets\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.004266 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 
08:28:46.004973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.005002 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94b7004c-c318-4872-a1b7-f983c691a523-images\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.005214 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1504048c-578a-42d2-a6de-9161ee1ebb82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.005542 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/94b7004c-c318-4872-a1b7-f983c691a523-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.005995 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ad652ed-d682-41ed-86e7-d49eaa1b6f0b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ngh5s\" (UID: \"4ad652ed-d682-41ed-86e7-d49eaa1b6f0b\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.006033 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.006262 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-config\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.006454 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-policies\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.022509 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-etcd-client\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.022782 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-encryption-config\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: 
\"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.023077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.023087 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.023100 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f92ddf87-e976-4f3c-9a8c-8a4dab665391-machine-approver-tls\") pod \"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.023485 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aff785a-03ef-4b1a-93d6-e2674725b053-serving-cert\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.024750 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-7v969"] Jan 01 
08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.025368 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.026030 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.026706 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.029235 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.030870 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-serving-cert\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.031329 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nxcwg"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.031372 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1504048c-578a-42d2-a6de-9161ee1ebb82-encryption-config\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.031332 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4ad652ed-d682-41ed-86e7-d49eaa1b6f0b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ngh5s\" (UID: \"4ad652ed-d682-41ed-86e7-d49eaa1b6f0b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.031398 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.031578 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.031788 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.031943 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02e8282b-37b6-4539-ad59-fae4c4c65a45-serving-cert\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:46 
crc kubenswrapper[4867]: I0101 08:28:46.033769 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.034338 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1504048c-578a-42d2-a6de-9161ee1ebb82-etcd-client\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.035185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1504048c-578a-42d2-a6de-9161ee1ebb82-serving-cert\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.035288 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.036038 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ks4bk"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.037475 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.042465 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.048844 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.048991 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.049618 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.051091 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d48qm"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.053250 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wn4kc"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.053467 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8tlg5"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.053438 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.054120 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n955p"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.054554 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n955p" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.054240 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.054926 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.055554 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.056345 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.056866 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.057970 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.058472 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.059910 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.060617 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.061119 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.061760 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.062867 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.063310 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.063751 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sbfbv"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.065880 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.066430 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.066636 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t67cg"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.072212 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.073551 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.074551 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.077497 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.078695 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.079656 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6lsq2"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.080804 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5gnbj"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.082216 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p"] Jan 01 
08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.084918 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.085035 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.086737 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.088071 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nb85"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.089390 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n955p"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.090781 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.093373 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b2crx"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094403 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-serving-cert\") pod \"console-operator-58897d9998-ks4bk\" (UID: \"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094428 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/18d7838a-e01c-42e1-ab20-a72878132ef1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lnc8j\" (UID: \"18d7838a-e01c-42e1-ab20-a72878132ef1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/01fa587e-a8a9-4092-9462-905cf90cf1dc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-sqxbg\" (UID: \"01fa587e-a8a9-4092-9462-905cf90cf1dc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-trusted-ca-bundle\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094501 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-config\") pod \"console-operator-58897d9998-ks4bk\" (UID: \"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094525 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz96w\" (UniqueName: \"kubernetes.io/projected/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-kube-api-access-bz96w\") pod \"console-operator-58897d9998-ks4bk\" (UID: \"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094542 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69dbb713-1149-4edd-899c-3fb77a8a36e2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b2crx\" (UID: \"69dbb713-1149-4edd-899c-3fb77a8a36e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094559 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-serving-cert\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094576 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42da4c7a-d738-4262-a395-1ff1c9d4f399-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: \"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094590 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-trusted-ca\") pod \"console-operator-58897d9998-ks4bk\" (UID: \"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094611 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18d7838a-e01c-42e1-ab20-a72878132ef1-serving-cert\") pod \"openshift-config-operator-7777fb866f-lnc8j\" (UID: \"18d7838a-e01c-42e1-ab20-a72878132ef1\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094635 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms9f6\" (UniqueName: \"kubernetes.io/projected/69dbb713-1149-4edd-899c-3fb77a8a36e2-kube-api-access-ms9f6\") pod \"multus-admission-controller-857f4d67dd-b2crx\" (UID: \"69dbb713-1149-4edd-899c-3fb77a8a36e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-oauth-config\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094684 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-oauth-serving-cert\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094710 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db27j\" (UniqueName: \"kubernetes.io/projected/42da4c7a-d738-4262-a395-1ff1c9d4f399-kube-api-access-db27j\") pod \"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: \"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82xsq\" (UniqueName: 
\"kubernetes.io/projected/18d7838a-e01c-42e1-ab20-a72878132ef1-kube-api-access-82xsq\") pod \"openshift-config-operator-7777fb866f-lnc8j\" (UID: \"18d7838a-e01c-42e1-ab20-a72878132ef1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094749 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42da4c7a-d738-4262-a395-1ff1c9d4f399-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: \"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094764 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/42da4c7a-d738-4262-a395-1ff1c9d4f399-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: \"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094786 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-service-ca\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c3b9b3-d6b0-46cb-aed9-c58555214163-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fbt8p\" (UID: \"87c3b9b3-d6b0-46cb-aed9-c58555214163\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094830 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68rp2\" (UniqueName: \"kubernetes.io/projected/87c3b9b3-d6b0-46cb-aed9-c58555214163-kube-api-access-68rp2\") pod \"openshift-controller-manager-operator-756b6f6bc6-fbt8p\" (UID: \"87c3b9b3-d6b0-46cb-aed9-c58555214163\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094848 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wqxg\" (UniqueName: \"kubernetes.io/projected/01fa587e-a8a9-4092-9462-905cf90cf1dc-kube-api-access-9wqxg\") pod \"cluster-samples-operator-665b6dd947-sqxbg\" (UID: \"01fa587e-a8a9-4092-9462-905cf90cf1dc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094857 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/18d7838a-e01c-42e1-ab20-a72878132ef1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lnc8j\" (UID: \"18d7838a-e01c-42e1-ab20-a72878132ef1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094863 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-config\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094962 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/851af35f-1738-49d8-855c-33e09731c8e3-metrics-tls\") pod \"dns-operator-744455d44c-t67cg\" (UID: \"851af35f-1738-49d8-855c-33e09731c8e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.094991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c3b9b3-d6b0-46cb-aed9-c58555214163-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fbt8p\" (UID: \"87c3b9b3-d6b0-46cb-aed9-c58555214163\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.095016 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7f5w\" (UniqueName: \"kubernetes.io/projected/851af35f-1738-49d8-855c-33e09731c8e3-kube-api-access-j7f5w\") pod \"dns-operator-744455d44c-t67cg\" (UID: \"851af35f-1738-49d8-855c-33e09731c8e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.095058 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrpfd\" (UniqueName: \"kubernetes.io/projected/25d57f2f-1353-417b-ba47-a0ceb1a4577e-kube-api-access-vrpfd\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.095466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-config\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.096863 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-oauth-serving-cert\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.096912 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.097341 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-service-ca\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.097449 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-config\") pod \"console-operator-58897d9998-ks4bk\" (UID: \"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.097963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-trusted-ca\") pod \"console-operator-58897d9998-ks4bk\" (UID: \"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.098204 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42da4c7a-d738-4262-a395-1ff1c9d4f399-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: 
\"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.098480 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-trusted-ca-bundle\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.098587 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c3b9b3-d6b0-46cb-aed9-c58555214163-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fbt8p\" (UID: \"87c3b9b3-d6b0-46cb-aed9-c58555214163\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.098613 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-oauth-config\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.099028 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/42da4c7a-d738-4262-a395-1ff1c9d4f399-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: \"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.099186 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-serving-cert\") pod \"console-operator-58897d9998-ks4bk\" (UID: \"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.099324 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c3b9b3-d6b0-46cb-aed9-c58555214163-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fbt8p\" (UID: \"87c3b9b3-d6b0-46cb-aed9-c58555214163\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.099588 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.100097 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/851af35f-1738-49d8-855c-33e09731c8e3-metrics-tls\") pod \"dns-operator-744455d44c-t67cg\" (UID: \"851af35f-1738-49d8-855c-33e09731c8e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.100380 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/01fa587e-a8a9-4092-9462-905cf90cf1dc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-sqxbg\" (UID: \"01fa587e-a8a9-4092-9462-905cf90cf1dc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.101337 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fxjs9"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.101336 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-serving-cert\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.102240 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18d7838a-e01c-42e1-ab20-a72878132ef1-serving-cert\") pod \"openshift-config-operator-7777fb866f-lnc8j\" (UID: \"18d7838a-e01c-42e1-ab20-a72878132ef1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.102749 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8tlg5"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.103904 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.105521 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-szsvh"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.105761 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.106459 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-szsvh" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.107673 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d48qm"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.109429 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.110400 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.111525 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.112701 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.114895 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.115526 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.116742 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.118576 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sbfbv"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.120138 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.124159 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69dxz"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.125495 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.126090 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.127560 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tb6h6"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.128673 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tb6h6" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.130417 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69dxz"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.131941 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tb6h6"] Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.145820 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.165012 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.185767 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 01 08:28:46 crc 
kubenswrapper[4867]: I0101 08:28:46.206020 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.206526 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69dbb713-1149-4edd-899c-3fb77a8a36e2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b2crx\" (UID: \"69dbb713-1149-4edd-899c-3fb77a8a36e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.245299 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.265223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.285466 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.305547 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.325604 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.345824 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.365561 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 01 08:28:46 crc 
kubenswrapper[4867]: I0101 08:28:46.385616 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.405570 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.425996 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.445378 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.466091 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.485612 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.505821 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.525943 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.547271 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.565811 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.585456 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.644452 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmfjs\" (UniqueName: \"kubernetes.io/projected/f92ddf87-e976-4f3c-9a8c-8a4dab665391-kube-api-access-vmfjs\") pod \"machine-approver-56656f9798-pttmf\" (UID: \"f92ddf87-e976-4f3c-9a8c-8a4dab665391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.652717 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjw4k\" (UniqueName: \"kubernetes.io/projected/1504048c-578a-42d2-a6de-9161ee1ebb82-kube-api-access-bjw4k\") pod \"apiserver-76f77b778f-bsfrw\" (UID: \"1504048c-578a-42d2-a6de-9161ee1ebb82\") " pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.672633 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dfqd\" (UniqueName: \"kubernetes.io/projected/9aff785a-03ef-4b1a-93d6-e2674725b053-kube-api-access-6dfqd\") pod \"route-controller-manager-6576b87f9c-j8lqg\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.694235 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmr9w\" (UniqueName: \"kubernetes.io/projected/ba5c2565-7b3c-4c9b-8600-6a572cc363e0-kube-api-access-gmr9w\") pod \"apiserver-7bbb656c7d-gjdqd\" (UID: \"ba5c2565-7b3c-4c9b-8600-6a572cc363e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.716383 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttrk2\" (UniqueName: 
\"kubernetes.io/projected/4ad652ed-d682-41ed-86e7-d49eaa1b6f0b-kube-api-access-ttrk2\") pod \"openshift-apiserver-operator-796bbdcf4f-ngh5s\" (UID: \"4ad652ed-d682-41ed-86e7-d49eaa1b6f0b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.735762 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwd6x\" (UniqueName: \"kubernetes.io/projected/02e8282b-37b6-4539-ad59-fae4c4c65a45-kube-api-access-dwd6x\") pod \"controller-manager-879f6c89f-jjglf\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.738380 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.749268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hhnd\" (UniqueName: \"kubernetes.io/projected/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-kube-api-access-2hhnd\") pod \"oauth-openshift-558db77b4-wn4kc\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.774493 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khjct\" (UniqueName: \"kubernetes.io/projected/94b7004c-c318-4872-a1b7-f983c691a523-kube-api-access-khjct\") pod \"machine-api-operator-5694c8668f-hmhl9\" (UID: \"94b7004c-c318-4872-a1b7-f983c691a523\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.786114 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.792612 4867 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.807541 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.810202 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.827753 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.846868 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.848812 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.867216 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.869013 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.875149 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.884967 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.885625 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.890758 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.910410 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.926094 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.948538 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.966632 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 01 08:28:46 crc kubenswrapper[4867]: I0101 08:28:46.985530 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.013192 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bsfrw"] Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.013462 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.030692 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.038923 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jjglf"] Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.048589 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.065015 4867 request.go:700] Waited for 1.011143296s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0 Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.066622 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.086847 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.106156 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.110008 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hmhl9"] Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.123137 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wn4kc"] Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.125186 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.151320 4867 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.157143 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s"] Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.171036 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.187672 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.206241 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.226312 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.245261 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.266492 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.286076 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.305288 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.324811 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 
08:28:47.345581 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.365205 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.385163 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.389951 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd"] Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.391232 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg"] Jan 01 08:28:47 crc kubenswrapper[4867]: W0101 08:28:47.397094 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5c2565_7b3c_4c9b_8600_6a572cc363e0.slice/crio-4d644e524613fa2798af272261e783dd839b792fb2e2440264dbb42afb40e234 WatchSource:0}: Error finding container 4d644e524613fa2798af272261e783dd839b792fb2e2440264dbb42afb40e234: Status 404 returned error can't find the container with id 4d644e524613fa2798af272261e783dd839b792fb2e2440264dbb42afb40e234 Jan 01 08:28:47 crc kubenswrapper[4867]: W0101 08:28:47.397896 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9aff785a_03ef_4b1a_93d6_e2674725b053.slice/crio-60445b314ea401024b0dab3b21a319a87fbe1b6c595219210a1c682cc3b81b05 WatchSource:0}: Error finding container 60445b314ea401024b0dab3b21a319a87fbe1b6c595219210a1c682cc3b81b05: Status 404 returned error can't find the container with id 
60445b314ea401024b0dab3b21a319a87fbe1b6c595219210a1c682cc3b81b05 Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.405220 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.425649 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.446311 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.473221 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.486014 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.505305 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.526032 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.546813 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.565922 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.586166 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.606827 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.626086 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.649863 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.665649 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.686206 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.706061 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.725711 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.746475 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.765742 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.786832 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.805597 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.828207 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.867004 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrpfd\" (UniqueName: \"kubernetes.io/projected/25d57f2f-1353-417b-ba47-a0ceb1a4577e-kube-api-access-vrpfd\") pod \"console-f9d7485db-6lsq2\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.868163 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" event={"ID":"9aff785a-03ef-4b1a-93d6-e2674725b053","Type":"ContainerStarted","Data":"009ccc0c7237d7edf16767bc1699bf6c13201f9ef5a80deb5da6618467562d9b"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.868216 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" event={"ID":"9aff785a-03ef-4b1a-93d6-e2674725b053","Type":"ContainerStarted","Data":"60445b314ea401024b0dab3b21a319a87fbe1b6c595219210a1c682cc3b81b05"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.869296 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" event={"ID":"158aa7f6-a8d3-4a58-a437-5962f1fc90a2","Type":"ContainerStarted","Data":"6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.869450 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" event={"ID":"158aa7f6-a8d3-4a58-a437-5962f1fc90a2","Type":"ContainerStarted","Data":"0b603f3b8f36c65d699b0b8691a489c873f55c6091901e2dd962d79065ffb475"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.871801 4867 generic.go:334] "Generic (PLEG): container finished" podID="1504048c-578a-42d2-a6de-9161ee1ebb82" containerID="7600e7fd1a4cc02c4fcc1f01f53af03ea0efc85ac18cd6d4311d6726dd57753d" exitCode=0 Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.871870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" event={"ID":"1504048c-578a-42d2-a6de-9161ee1ebb82","Type":"ContainerDied","Data":"7600e7fd1a4cc02c4fcc1f01f53af03ea0efc85ac18cd6d4311d6726dd57753d"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.871913 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" event={"ID":"1504048c-578a-42d2-a6de-9161ee1ebb82","Type":"ContainerStarted","Data":"bbba2c7d1a800625b76287297de860da2fbdbe36728e08a29c8cbf0cf71718b8"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.873284 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" event={"ID":"94b7004c-c318-4872-a1b7-f983c691a523","Type":"ContainerStarted","Data":"ab348c6770ad7f70df62a787fae3821dcecc769fa70c5e755872ab7b8279c18f"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.873328 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" event={"ID":"94b7004c-c318-4872-a1b7-f983c691a523","Type":"ContainerStarted","Data":"13777061673c64e1e35e968783695ff306b9b34be33b78743bd203cfe5326b27"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.876071 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" event={"ID":"f92ddf87-e976-4f3c-9a8c-8a4dab665391","Type":"ContainerStarted","Data":"8909e646c66034e33bf7fd0470bc167c5bc77c754de5fcaa2b788667eea67760"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.876293 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" event={"ID":"f92ddf87-e976-4f3c-9a8c-8a4dab665391","Type":"ContainerStarted","Data":"e72e156100ec18751aa37ee8a6fad0e6519a457d95971e3c521819484ca44995"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.877086 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" event={"ID":"4ad652ed-d682-41ed-86e7-d49eaa1b6f0b","Type":"ContainerStarted","Data":"1d3692f9b4e5ce6baed6eaa3ffda88c4d9c29ee326f14c741ef36702d00a352d"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.877298 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" event={"ID":"4ad652ed-d682-41ed-86e7-d49eaa1b6f0b","Type":"ContainerStarted","Data":"43df2218eacf6a171fef9ef5f08ab9f0d3b88c1179cbb737c1e49be6fb7b759f"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.877794 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" event={"ID":"ba5c2565-7b3c-4c9b-8600-6a572cc363e0","Type":"ContainerStarted","Data":"4d644e524613fa2798af272261e783dd839b792fb2e2440264dbb42afb40e234"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.878921 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" event={"ID":"02e8282b-37b6-4539-ad59-fae4c4c65a45","Type":"ContainerStarted","Data":"288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 
08:28:47.878952 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" event={"ID":"02e8282b-37b6-4539-ad59-fae4c4c65a45","Type":"ContainerStarted","Data":"3b257c2977e4df99d1928656e5c44fde6cf619b30be51455b966ca0bda8595f0"} Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.879961 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.883324 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jjglf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.883577 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" podUID="02e8282b-37b6-4539-ad59-fae4c4c65a45" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.884856 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db27j\" (UniqueName: \"kubernetes.io/projected/42da4c7a-d738-4262-a395-1ff1c9d4f399-kube-api-access-db27j\") pod \"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: \"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.894349 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.901580 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7f5w\" (UniqueName: \"kubernetes.io/projected/851af35f-1738-49d8-855c-33e09731c8e3-kube-api-access-j7f5w\") pod \"dns-operator-744455d44c-t67cg\" (UID: \"851af35f-1738-49d8-855c-33e09731c8e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.928797 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68rp2\" (UniqueName: \"kubernetes.io/projected/87c3b9b3-d6b0-46cb-aed9-c58555214163-kube-api-access-68rp2\") pod \"openshift-controller-manager-operator-756b6f6bc6-fbt8p\" (UID: \"87c3b9b3-d6b0-46cb-aed9-c58555214163\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.951353 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82xsq\" (UniqueName: \"kubernetes.io/projected/18d7838a-e01c-42e1-ab20-a72878132ef1-kube-api-access-82xsq\") pod \"openshift-config-operator-7777fb866f-lnc8j\" (UID: \"18d7838a-e01c-42e1-ab20-a72878132ef1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.959695 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wqxg\" (UniqueName: \"kubernetes.io/projected/01fa587e-a8a9-4092-9462-905cf90cf1dc-kube-api-access-9wqxg\") pod \"cluster-samples-operator-665b6dd947-sqxbg\" (UID: \"01fa587e-a8a9-4092-9462-905cf90cf1dc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" Jan 01 08:28:47 crc kubenswrapper[4867]: I0101 08:28:47.979507 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/42da4c7a-d738-4262-a395-1ff1c9d4f399-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8k4x7\" (UID: \"42da4c7a-d738-4262-a395-1ff1c9d4f399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.001629 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz96w\" (UniqueName: \"kubernetes.io/projected/655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3-kube-api-access-bz96w\") pod \"console-operator-58897d9998-ks4bk\" (UID: \"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3\") " pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.032947 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.045174 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.057664 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms9f6\" (UniqueName: \"kubernetes.io/projected/69dbb713-1149-4edd-899c-3fb77a8a36e2-kube-api-access-ms9f6\") pod \"multus-admission-controller-857f4d67dd-b2crx\" (UID: \"69dbb713-1149-4edd-899c-3fb77a8a36e2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.065550 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.065603 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6lsq2"] Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.083748 4867 request.go:700] Waited for 
1.957462521s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0 Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.085122 4867 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.105201 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.125700 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.146773 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.153796 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.159517 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.166271 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.175616 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.185470 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.187838 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.199076 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.205696 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.224288 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234473 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234505 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-certificates\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234577 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7063db7c-d5df-471f-ad8a-48a99bfd8e19-config\") pod \"kube-apiserver-operator-766d6c64bb-t99qw\" (UID: \"7063db7c-d5df-471f-ad8a-48a99bfd8e19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234598 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7063db7c-d5df-471f-ad8a-48a99bfd8e19-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t99qw\" (UID: \"7063db7c-d5df-471f-ad8a-48a99bfd8e19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234630 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/974e660d-a7f0-4c6b-922a-adcccb236a54-serving-cert\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234668 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vjl\" (UniqueName: \"kubernetes.io/projected/974e660d-a7f0-4c6b-922a-adcccb236a54-kube-api-access-t4vjl\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234693 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974e660d-a7f0-4c6b-922a-adcccb236a54-config\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234733 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7063db7c-d5df-471f-ad8a-48a99bfd8e19-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t99qw\" (UID: \"7063db7c-d5df-471f-ad8a-48a99bfd8e19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234752 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-bound-sa-token\") 
pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234773 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kv7b\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-kube-api-access-8kv7b\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234815 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bab255b3-6b41-494f-a4f6-dde5ebe7b538-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234832 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-tls\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-trusted-ca\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234878 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974e660d-a7f0-4c6b-922a-adcccb236a54-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234910 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bab255b3-6b41-494f-a4f6-dde5ebe7b538-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234925 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8z7w\" (UniqueName: \"kubernetes.io/projected/9e8f3532-2214-4769-b4f1-2edb51ca7aec-kube-api-access-m8z7w\") pod \"downloads-7954f5f757-fxjs9\" (UID: \"9e8f3532-2214-4769-b4f1-2edb51ca7aec\") " pod="openshift-console/downloads-7954f5f757-fxjs9" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.234940 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974e660d-a7f0-4c6b-922a-adcccb236a54-service-ca-bundle\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: E0101 08:28:48.235214 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-01 08:28:48.735203356 +0000 UTC m=+137.870472125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.240911 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335272 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335471 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-tls\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335498 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9523f6a7-595f-4984-87e7-3b78d9a4222c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335516 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-webhook-cert\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: \"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335536 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw28j\" (UniqueName: \"kubernetes.io/projected/e620889a-27ac-4bc5-99e9-b1033d3f2345-kube-api-access-pw28j\") pod \"service-ca-9c57cc56f-n955p\" (UID: \"e620889a-27ac-4bc5-99e9-b1033d3f2345\") " pod="openshift-service-ca/service-ca-9c57cc56f-n955p" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335550 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f1b6231-5477-4761-8cdb-8fb1b61756c5-cert\") pod \"ingress-canary-tb6h6\" (UID: \"0f1b6231-5477-4761-8cdb-8fb1b61756c5\") " pod="openshift-ingress-canary/ingress-canary-tb6h6" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335579 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974e660d-a7f0-4c6b-922a-adcccb236a54-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335598 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2dpd\" (UniqueName: 
\"kubernetes.io/projected/53c32bbb-3266-4a0c-8e56-ce2b80becc85-kube-api-access-m2dpd\") pod \"package-server-manager-789f6589d5-77tbv\" (UID: \"53c32bbb-3266-4a0c-8e56-ce2b80becc85\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa7410b8-9dc3-410f-9c3b-c8cac55804c7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nsfm2\" (UID: \"aa7410b8-9dc3-410f-9c3b-c8cac55804c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-certificates\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335659 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8tlg5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335684 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-config\") pod \"etcd-operator-b45778765-5gnbj\" (UID: 
\"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335708 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcf66\" (UniqueName: \"kubernetes.io/projected/0f1b6231-5477-4761-8cdb-8fb1b61756c5-kube-api-access-mcf66\") pod \"ingress-canary-tb6h6\" (UID: \"0f1b6231-5477-4761-8cdb-8fb1b61756c5\") " pod="openshift-ingress-canary/ingress-canary-tb6h6" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c637114e-4a21-4bea-86fc-c15d89d4c72f-config\") pod \"service-ca-operator-777779d784-d48qm\" (UID: \"c637114e-4a21-4bea-86fc-c15d89d4c72f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335811 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7063db7c-d5df-471f-ad8a-48a99bfd8e19-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t99qw\" (UID: \"7063db7c-d5df-471f-ad8a-48a99bfd8e19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335829 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4klht\" (UniqueName: \"kubernetes.io/projected/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-kube-api-access-4klht\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335844 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/e620889a-27ac-4bc5-99e9-b1033d3f2345-signing-key\") pod \"service-ca-9c57cc56f-n955p\" (UID: \"e620889a-27ac-4bc5-99e9-b1033d3f2345\") " pod="openshift-service-ca/service-ca-9c57cc56f-n955p" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335860 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ghwk\" (UniqueName: \"kubernetes.io/projected/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-kube-api-access-7ghwk\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335905 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-csi-data-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335955 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-mountpoint-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.335978 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-stats-auth\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336001 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba8ed252-ce14-45fe-8a3e-82d7981d3acb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-67p67\" (UID: \"ba8ed252-ce14-45fe-8a3e-82d7981d3acb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336041 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53c32bbb-3266-4a0c-8e56-ce2b80becc85-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-77tbv\" (UID: \"53c32bbb-3266-4a0c-8e56-ce2b80becc85\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" Jan 01 08:28:48 crc kubenswrapper[4867]: E0101 08:28:48.336069 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:48.836045335 +0000 UTC m=+137.971314104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336141 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9zj\" (UniqueName: \"kubernetes.io/projected/fe162e8b-b505-4876-ae5a-c2314639fda9-kube-api-access-zj9zj\") pod \"kube-storage-version-migrator-operator-b67b599dd-8mps5\" (UID: \"fe162e8b-b505-4876-ae5a-c2314639fda9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336166 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fb2725a-267d-41b2-bf92-e18f82bcbeab-config\") pod \"kube-controller-manager-operator-78b949d7b-qqrkc\" (UID: \"3fb2725a-267d-41b2-bf92-e18f82bcbeab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336185 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c372fe91-d8d2-41b0-a85d-327796979bf1-certs\") pod \"machine-config-server-szsvh\" (UID: \"c372fe91-d8d2-41b0-a85d-327796979bf1\") " pod="openshift-machine-config-operator/machine-config-server-szsvh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336203 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e93786ca-d394-46a5-94d8-00c6056cea5a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zjdzc\" (UID: \"e93786ca-d394-46a5-94d8-00c6056cea5a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb-config-volume\") pod \"dns-default-sbfbv\" (UID: \"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb\") " pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336238 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-service-ca-bundle\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336253 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-metrics-certs\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336287 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974e660d-a7f0-4c6b-922a-adcccb236a54-config\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc 
kubenswrapper[4867]: I0101 08:28:48.336307 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/300dbd86-e6d3-4baf-9cfb-77f0930937a4-profile-collector-cert\") pod \"catalog-operator-68c6474976-5klc7\" (UID: \"300dbd86-e6d3-4baf-9cfb-77f0930937a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336346 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-apiservice-cert\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: \"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336371 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-bound-sa-token\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336392 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-registration-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336428 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kv7b\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-kube-api-access-8kv7b\") pod 
\"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336445 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c372fe91-d8d2-41b0-a85d-327796979bf1-node-bootstrap-token\") pod \"machine-config-server-szsvh\" (UID: \"c372fe91-d8d2-41b0-a85d-327796979bf1\") " pod="openshift-machine-config-operator/machine-config-server-szsvh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336481 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfwfm\" (UniqueName: \"kubernetes.io/projected/c637114e-4a21-4bea-86fc-c15d89d4c72f-kube-api-access-rfwfm\") pod \"service-ca-operator-777779d784-d48qm\" (UID: \"c637114e-4a21-4bea-86fc-c15d89d4c72f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336499 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwvjk\" (UniqueName: \"kubernetes.io/projected/e93786ca-d394-46a5-94d8-00c6056cea5a-kube-api-access-kwvjk\") pod \"machine-config-controller-84d6567774-zjdzc\" (UID: \"e93786ca-d394-46a5-94d8-00c6056cea5a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336516 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvc9n\" (UniqueName: \"kubernetes.io/projected/cab9f457-c407-4075-bddf-deb1c8e86b45-kube-api-access-wvc9n\") pod \"olm-operator-6b444d44fb-7qdzz\" (UID: \"cab9f457-c407-4075-bddf-deb1c8e86b45\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:48 crc 
kubenswrapper[4867]: I0101 08:28:48.336566 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-serving-cert\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336581 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-etcd-client\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336608 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5fw9\" (UniqueName: \"kubernetes.io/projected/aa7410b8-9dc3-410f-9c3b-c8cac55804c7-kube-api-access-d5fw9\") pod \"control-plane-machine-set-operator-78cbb6b69f-nsfm2\" (UID: \"aa7410b8-9dc3-410f-9c3b-c8cac55804c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336623 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-secret-volume\") pod \"collect-profiles-29454255-49zxs\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336639 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e93786ca-d394-46a5-94d8-00c6056cea5a-proxy-tls\") pod 
\"machine-config-controller-84d6567774-zjdzc\" (UID: \"e93786ca-d394-46a5-94d8-00c6056cea5a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336661 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fb2725a-267d-41b2-bf92-e18f82bcbeab-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qqrkc\" (UID: \"3fb2725a-267d-41b2-bf92-e18f82bcbeab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336691 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-trusted-ca\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8tlg5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336727 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e620889a-27ac-4bc5-99e9-b1033d3f2345-signing-cabundle\") pod \"service-ca-9c57cc56f-n955p\" (UID: \"e620889a-27ac-4bc5-99e9-b1033d3f2345\") " pod="openshift-service-ca/service-ca-9c57cc56f-n955p" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 
08:28:48.336744 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvtx6\" (UniqueName: \"kubernetes.io/projected/e005805b-ffc8-4dcf-88bc-31611953d870-kube-api-access-qvtx6\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bab255b3-6b41-494f-a4f6-dde5ebe7b538-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336778 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8z7w\" (UniqueName: \"kubernetes.io/projected/9e8f3532-2214-4769-b4f1-2edb51ca7aec-kube-api-access-m8z7w\") pod \"downloads-7954f5f757-fxjs9\" (UID: \"9e8f3532-2214-4769-b4f1-2edb51ca7aec\") " pod="openshift-console/downloads-7954f5f757-fxjs9" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336794 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974e660d-a7f0-4c6b-922a-adcccb236a54-service-ca-bundle\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336809 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cab9f457-c407-4075-bddf-deb1c8e86b45-srv-cert\") pod \"olm-operator-6b444d44fb-7qdzz\" (UID: 
\"cab9f457-c407-4075-bddf-deb1c8e86b45\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336828 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-socket-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336858 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79vcr\" (UniqueName: \"kubernetes.io/projected/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-kube-api-access-79vcr\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336874 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5v45\" (UniqueName: \"kubernetes.io/projected/c372fe91-d8d2-41b0-a85d-327796979bf1-kube-api-access-r5v45\") pod \"machine-config-server-szsvh\" (UID: \"c372fe91-d8d2-41b0-a85d-327796979bf1\") " pod="openshift-machine-config-operator/machine-config-server-szsvh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336901 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-plugins-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336920 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7063db7c-d5df-471f-ad8a-48a99bfd8e19-config\") pod \"kube-apiserver-operator-766d6c64bb-t99qw\" (UID: \"7063db7c-d5df-471f-ad8a-48a99bfd8e19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336938 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9523f6a7-595f-4984-87e7-3b78d9a4222c-trusted-ca\") pod \"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336954 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tljtw\" (UniqueName: \"kubernetes.io/projected/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-kube-api-access-tljtw\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: \"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.336969 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ed252-ce14-45fe-8a3e-82d7981d3acb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-67p67\" (UID: \"ba8ed252-ce14-45fe-8a3e-82d7981d3acb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.337007 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgz5x\" (UniqueName: \"kubernetes.io/projected/8e196803-de1e-4b07-acf8-489c31eb0bf2-kube-api-access-vgz5x\") pod \"migrator-59844c95c7-7qgnh\" (UID: \"8e196803-de1e-4b07-acf8-489c31eb0bf2\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.337042 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9523f6a7-595f-4984-87e7-3b78d9a4222c-metrics-tls\") pod \"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.337070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e005805b-ffc8-4dcf-88bc-31611953d870-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.337092 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-etcd-service-ca\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.337108 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe162e8b-b505-4876-ae5a-c2314639fda9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8mps5\" (UID: \"fe162e8b-b505-4876-ae5a-c2314639fda9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.337123 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f8jr\" (UniqueName: \"kubernetes.io/projected/9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb-kube-api-access-7f8jr\") pod \"dns-default-sbfbv\" (UID: \"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb\") " pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.337169 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg47j\" (UniqueName: \"kubernetes.io/projected/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-kube-api-access-tg47j\") pod \"collect-profiles-29454255-49zxs\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.337186 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/974e660d-a7f0-4c6b-922a-adcccb236a54-serving-cert\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.337205 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e005805b-ffc8-4dcf-88bc-31611953d870-proxy-tls\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.337221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/300dbd86-e6d3-4baf-9cfb-77f0930937a4-srv-cert\") pod \"catalog-operator-68c6474976-5klc7\" (UID: \"300dbd86-e6d3-4baf-9cfb-77f0930937a4\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339228 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974e660d-a7f0-4c6b-922a-adcccb236a54-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339301 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vjl\" (UniqueName: \"kubernetes.io/projected/974e660d-a7f0-4c6b-922a-adcccb236a54-kube-api-access-t4vjl\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339325 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e005805b-ffc8-4dcf-88bc-31611953d870-images\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339358 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb-metrics-tls\") pod \"dns-default-sbfbv\" (UID: \"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb\") " pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-default-certificate\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339393 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-config-volume\") pod \"collect-profiles-29454255-49zxs\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7063db7c-d5df-471f-ad8a-48a99bfd8e19-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t99qw\" (UID: \"7063db7c-d5df-471f-ad8a-48a99bfd8e19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339437 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fb2725a-267d-41b2-bf92-e18f82bcbeab-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qqrkc\" (UID: \"3fb2725a-267d-41b2-bf92-e18f82bcbeab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339482 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp8tg\" (UniqueName: \"kubernetes.io/projected/15e74714-78ff-4351-9088-ddf6672ce8a5-kube-api-access-fp8tg\") pod \"marketplace-operator-79b997595-8tlg5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339503 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c637114e-4a21-4bea-86fc-c15d89d4c72f-serving-cert\") pod \"service-ca-operator-777779d784-d48qm\" (UID: \"c637114e-4a21-4bea-86fc-c15d89d4c72f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339551 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cab9f457-c407-4075-bddf-deb1c8e86b45-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7qdzz\" (UID: \"cab9f457-c407-4075-bddf-deb1c8e86b45\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339579 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-etcd-ca\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339594 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ed252-ce14-45fe-8a3e-82d7981d3acb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-67p67\" (UID: \"ba8ed252-ce14-45fe-8a3e-82d7981d3acb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339620 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-m8r89\" (UniqueName: \"kubernetes.io/projected/9523f6a7-595f-4984-87e7-3b78d9a4222c-kube-api-access-m8r89\") pod \"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339649 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe162e8b-b505-4876-ae5a-c2314639fda9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8mps5\" (UID: \"fe162e8b-b505-4876-ae5a-c2314639fda9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339698 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bab255b3-6b41-494f-a4f6-dde5ebe7b538-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339718 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhdm2\" (UniqueName: \"kubernetes.io/projected/300dbd86-e6d3-4baf-9cfb-77f0930937a4-kube-api-access-dhdm2\") pod \"catalog-operator-68c6474976-5klc7\" (UID: \"300dbd86-e6d3-4baf-9cfb-77f0930937a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.339732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-tmpfs\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: 
\"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.342352 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974e660d-a7f0-4c6b-922a-adcccb236a54-config\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.344013 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/974e660d-a7f0-4c6b-922a-adcccb236a54-serving-cert\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.344149 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-certificates\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.346389 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-trusted-ca\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.353440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7063db7c-d5df-471f-ad8a-48a99bfd8e19-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t99qw\" (UID: \"7063db7c-d5df-471f-ad8a-48a99bfd8e19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.353850 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bab255b3-6b41-494f-a4f6-dde5ebe7b538-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.354972 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/974e660d-a7f0-4c6b-922a-adcccb236a54-service-ca-bundle\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.355610 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7063db7c-d5df-471f-ad8a-48a99bfd8e19-config\") pod \"kube-apiserver-operator-766d6c64bb-t99qw\" (UID: \"7063db7c-d5df-471f-ad8a-48a99bfd8e19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.360338 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bab255b3-6b41-494f-a4f6-dde5ebe7b538-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.378666 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-tls\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.396479 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kv7b\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-kube-api-access-8kv7b\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.402370 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-bound-sa-token\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.425782 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vjl\" (UniqueName: \"kubernetes.io/projected/974e660d-a7f0-4c6b-922a-adcccb236a54-kube-api-access-t4vjl\") pod \"authentication-operator-69f744f599-nxcwg\" (UID: \"974e660d-a7f0-4c6b-922a-adcccb236a54\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448151 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp8tg\" (UniqueName: \"kubernetes.io/projected/15e74714-78ff-4351-9088-ddf6672ce8a5-kube-api-access-fp8tg\") pod \"marketplace-operator-79b997595-8tlg5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448202 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c637114e-4a21-4bea-86fc-c15d89d4c72f-serving-cert\") pod \"service-ca-operator-777779d784-d48qm\" (UID: \"c637114e-4a21-4bea-86fc-c15d89d4c72f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448229 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cab9f457-c407-4075-bddf-deb1c8e86b45-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7qdzz\" (UID: \"cab9f457-c407-4075-bddf-deb1c8e86b45\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448263 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-etcd-ca\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448286 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ed252-ce14-45fe-8a3e-82d7981d3acb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-67p67\" (UID: \"ba8ed252-ce14-45fe-8a3e-82d7981d3acb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448313 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8r89\" (UniqueName: 
\"kubernetes.io/projected/9523f6a7-595f-4984-87e7-3b78d9a4222c-kube-api-access-m8r89\") pod \"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448335 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe162e8b-b505-4876-ae5a-c2314639fda9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8mps5\" (UID: \"fe162e8b-b505-4876-ae5a-c2314639fda9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhdm2\" (UniqueName: \"kubernetes.io/projected/300dbd86-e6d3-4baf-9cfb-77f0930937a4-kube-api-access-dhdm2\") pod \"catalog-operator-68c6474976-5klc7\" (UID: \"300dbd86-e6d3-4baf-9cfb-77f0930937a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448381 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-tmpfs\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: \"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9523f6a7-595f-4984-87e7-3b78d9a4222c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc 
kubenswrapper[4867]: I0101 08:28:48.448426 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-webhook-cert\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: \"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw28j\" (UniqueName: \"kubernetes.io/projected/e620889a-27ac-4bc5-99e9-b1033d3f2345-kube-api-access-pw28j\") pod \"service-ca-9c57cc56f-n955p\" (UID: \"e620889a-27ac-4bc5-99e9-b1033d3f2345\") " pod="openshift-service-ca/service-ca-9c57cc56f-n955p" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448473 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f1b6231-5477-4761-8cdb-8fb1b61756c5-cert\") pod \"ingress-canary-tb6h6\" (UID: \"0f1b6231-5477-4761-8cdb-8fb1b61756c5\") " pod="openshift-ingress-canary/ingress-canary-tb6h6" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.452704 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-tmpfs\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: \"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.452721 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe162e8b-b505-4876-ae5a-c2314639fda9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8mps5\" (UID: \"fe162e8b-b505-4876-ae5a-c2314639fda9\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.453384 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-etcd-ca\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.448499 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2dpd\" (UniqueName: \"kubernetes.io/projected/53c32bbb-3266-4a0c-8e56-ce2b80becc85-kube-api-access-m2dpd\") pod \"package-server-manager-789f6589d5-77tbv\" (UID: \"53c32bbb-3266-4a0c-8e56-ce2b80becc85\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.453878 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa7410b8-9dc3-410f-9c3b-c8cac55804c7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nsfm2\" (UID: \"aa7410b8-9dc3-410f-9c3b-c8cac55804c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.453930 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454052 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8tlg5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454093 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-config\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454155 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcf66\" (UniqueName: \"kubernetes.io/projected/0f1b6231-5477-4761-8cdb-8fb1b61756c5-kube-api-access-mcf66\") pod \"ingress-canary-tb6h6\" (UID: \"0f1b6231-5477-4761-8cdb-8fb1b61756c5\") " pod="openshift-ingress-canary/ingress-canary-tb6h6" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454177 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c637114e-4a21-4bea-86fc-c15d89d4c72f-config\") pod \"service-ca-operator-777779d784-d48qm\" (UID: \"c637114e-4a21-4bea-86fc-c15d89d4c72f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454255 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4klht\" (UniqueName: \"kubernetes.io/projected/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-kube-api-access-4klht\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 
08:28:48.454277 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e620889a-27ac-4bc5-99e9-b1033d3f2345-signing-key\") pod \"service-ca-9c57cc56f-n955p\" (UID: \"e620889a-27ac-4bc5-99e9-b1033d3f2345\") " pod="openshift-service-ca/service-ca-9c57cc56f-n955p" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454300 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ghwk\" (UniqueName: \"kubernetes.io/projected/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-kube-api-access-7ghwk\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-csi-data-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454371 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-mountpoint-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454394 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-stats-auth\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454419 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba8ed252-ce14-45fe-8a3e-82d7981d3acb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-67p67\" (UID: \"ba8ed252-ce14-45fe-8a3e-82d7981d3acb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454446 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53c32bbb-3266-4a0c-8e56-ce2b80becc85-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-77tbv\" (UID: \"53c32bbb-3266-4a0c-8e56-ce2b80becc85\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454502 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9zj\" (UniqueName: \"kubernetes.io/projected/fe162e8b-b505-4876-ae5a-c2314639fda9-kube-api-access-zj9zj\") pod \"kube-storage-version-migrator-operator-b67b599dd-8mps5\" (UID: \"fe162e8b-b505-4876-ae5a-c2314639fda9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454532 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fb2725a-267d-41b2-bf92-e18f82bcbeab-config\") pod \"kube-controller-manager-operator-78b949d7b-qqrkc\" (UID: \"3fb2725a-267d-41b2-bf92-e18f82bcbeab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454557 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/c372fe91-d8d2-41b0-a85d-327796979bf1-certs\") pod \"machine-config-server-szsvh\" (UID: \"c372fe91-d8d2-41b0-a85d-327796979bf1\") " pod="openshift-machine-config-operator/machine-config-server-szsvh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e93786ca-d394-46a5-94d8-00c6056cea5a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zjdzc\" (UID: \"e93786ca-d394-46a5-94d8-00c6056cea5a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454608 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb-config-volume\") pod \"dns-default-sbfbv\" (UID: \"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb\") " pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454629 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-service-ca-bundle\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-metrics-certs\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454677 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/300dbd86-e6d3-4baf-9cfb-77f0930937a4-profile-collector-cert\") pod \"catalog-operator-68c6474976-5klc7\" (UID: \"300dbd86-e6d3-4baf-9cfb-77f0930937a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454705 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-apiservice-cert\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: \"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454729 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-registration-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454753 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c372fe91-d8d2-41b0-a85d-327796979bf1-node-bootstrap-token\") pod \"machine-config-server-szsvh\" (UID: \"c372fe91-d8d2-41b0-a85d-327796979bf1\") " pod="openshift-machine-config-operator/machine-config-server-szsvh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454782 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfwfm\" (UniqueName: \"kubernetes.io/projected/c637114e-4a21-4bea-86fc-c15d89d4c72f-kube-api-access-rfwfm\") pod \"service-ca-operator-777779d784-d48qm\" (UID: \"c637114e-4a21-4bea-86fc-c15d89d4c72f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" Jan 01 08:28:48 crc 
kubenswrapper[4867]: I0101 08:28:48.454808 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwvjk\" (UniqueName: \"kubernetes.io/projected/e93786ca-d394-46a5-94d8-00c6056cea5a-kube-api-access-kwvjk\") pod \"machine-config-controller-84d6567774-zjdzc\" (UID: \"e93786ca-d394-46a5-94d8-00c6056cea5a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454831 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvc9n\" (UniqueName: \"kubernetes.io/projected/cab9f457-c407-4075-bddf-deb1c8e86b45-kube-api-access-wvc9n\") pod \"olm-operator-6b444d44fb-7qdzz\" (UID: \"cab9f457-c407-4075-bddf-deb1c8e86b45\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454859 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-serving-cert\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.462984 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-webhook-cert\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: \"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.463075 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-mountpoint-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " 
pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.463960 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-csi-data-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: E0101 08:28:48.465869 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:48.965853539 +0000 UTC m=+138.101122308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.454785 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cab9f457-c407-4075-bddf-deb1c8e86b45-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7qdzz\" (UID: \"cab9f457-c407-4075-bddf-deb1c8e86b45\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.468776 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c637114e-4a21-4bea-86fc-c15d89d4c72f-serving-cert\") pod \"service-ca-operator-777779d784-d48qm\" (UID: 
\"c637114e-4a21-4bea-86fc-c15d89d4c72f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.469266 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ed252-ce14-45fe-8a3e-82d7981d3acb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-67p67\" (UID: \"ba8ed252-ce14-45fe-8a3e-82d7981d3acb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.469484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-etcd-client\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.469543 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb-config-volume\") pod \"dns-default-sbfbv\" (UID: \"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb\") " pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.469573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5fw9\" (UniqueName: \"kubernetes.io/projected/aa7410b8-9dc3-410f-9c3b-c8cac55804c7-kube-api-access-d5fw9\") pod \"control-plane-machine-set-operator-78cbb6b69f-nsfm2\" (UID: \"aa7410b8-9dc3-410f-9c3b-c8cac55804c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.469607 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-secret-volume\") pod \"collect-profiles-29454255-49zxs\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.469630 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e93786ca-d394-46a5-94d8-00c6056cea5a-proxy-tls\") pod \"machine-config-controller-84d6567774-zjdzc\" (UID: \"e93786ca-d394-46a5-94d8-00c6056cea5a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.469658 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fb2725a-267d-41b2-bf92-e18f82bcbeab-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qqrkc\" (UID: \"3fb2725a-267d-41b2-bf92-e18f82bcbeab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.469690 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8tlg5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.469715 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e620889a-27ac-4bc5-99e9-b1033d3f2345-signing-cabundle\") pod \"service-ca-9c57cc56f-n955p\" (UID: \"e620889a-27ac-4bc5-99e9-b1033d3f2345\") " pod="openshift-service-ca/service-ca-9c57cc56f-n955p" Jan 01 08:28:48 crc 
kubenswrapper[4867]: I0101 08:28:48.469740 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvtx6\" (UniqueName: \"kubernetes.io/projected/e005805b-ffc8-4dcf-88bc-31611953d870-kube-api-access-qvtx6\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.469802 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-config\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.467859 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-registration-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.470902 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c637114e-4a21-4bea-86fc-c15d89d4c72f-config\") pod \"service-ca-operator-777779d784-d48qm\" (UID: \"c637114e-4a21-4bea-86fc-c15d89d4c72f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.471104 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7063db7c-d5df-471f-ad8a-48a99bfd8e19-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t99qw\" (UID: \"7063db7c-d5df-471f-ad8a-48a99bfd8e19\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.469774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cab9f457-c407-4075-bddf-deb1c8e86b45-srv-cert\") pod \"olm-operator-6b444d44fb-7qdzz\" (UID: \"cab9f457-c407-4075-bddf-deb1c8e86b45\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.478170 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-socket-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.478426 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79vcr\" (UniqueName: \"kubernetes.io/projected/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-kube-api-access-79vcr\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.478470 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5v45\" (UniqueName: \"kubernetes.io/projected/c372fe91-d8d2-41b0-a85d-327796979bf1-kube-api-access-r5v45\") pod \"machine-config-server-szsvh\" (UID: \"c372fe91-d8d2-41b0-a85d-327796979bf1\") " pod="openshift-machine-config-operator/machine-config-server-szsvh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484100 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-plugins-dir\") pod \"csi-hostpathplugin-69dxz\" 
(UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484141 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9523f6a7-595f-4984-87e7-3b78d9a4222c-trusted-ca\") pod \"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tljtw\" (UniqueName: \"kubernetes.io/projected/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-kube-api-access-tljtw\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: \"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484189 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ed252-ce14-45fe-8a3e-82d7981d3acb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-67p67\" (UID: \"ba8ed252-ce14-45fe-8a3e-82d7981d3acb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484214 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgz5x\" (UniqueName: \"kubernetes.io/projected/8e196803-de1e-4b07-acf8-489c31eb0bf2-kube-api-access-vgz5x\") pod \"migrator-59844c95c7-7qgnh\" (UID: \"8e196803-de1e-4b07-acf8-489c31eb0bf2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484259 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/9523f6a7-595f-4984-87e7-3b78d9a4222c-metrics-tls\") pod \"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484289 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e005805b-ffc8-4dcf-88bc-31611953d870-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484327 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-etcd-service-ca\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484350 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe162e8b-b505-4876-ae5a-c2314639fda9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8mps5\" (UID: \"fe162e8b-b505-4876-ae5a-c2314639fda9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484371 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f8jr\" (UniqueName: \"kubernetes.io/projected/9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb-kube-api-access-7f8jr\") pod \"dns-default-sbfbv\" (UID: \"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb\") " pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 
08:28:48.484396 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg47j\" (UniqueName: \"kubernetes.io/projected/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-kube-api-access-tg47j\") pod \"collect-profiles-29454255-49zxs\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484419 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e005805b-ffc8-4dcf-88bc-31611953d870-proxy-tls\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484442 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/300dbd86-e6d3-4baf-9cfb-77f0930937a4-srv-cert\") pod \"catalog-operator-68c6474976-5klc7\" (UID: \"300dbd86-e6d3-4baf-9cfb-77f0930937a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e005805b-ffc8-4dcf-88bc-31611953d870-images\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484502 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb-metrics-tls\") pod \"dns-default-sbfbv\" (UID: \"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb\") " 
pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-default-certificate\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484558 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-config-volume\") pod \"collect-profiles-29454255-49zxs\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.484578 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fb2725a-267d-41b2-bf92-e18f82bcbeab-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qqrkc\" (UID: \"3fb2725a-267d-41b2-bf92-e18f82bcbeab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.480202 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.486609 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e93786ca-d394-46a5-94d8-00c6056cea5a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zjdzc\" (UID: \"e93786ca-d394-46a5-94d8-00c6056cea5a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.486818 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-plugins-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.480381 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-socket-dir\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.487465 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ed252-ce14-45fe-8a3e-82d7981d3acb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-67p67\" (UID: \"ba8ed252-ce14-45fe-8a3e-82d7981d3acb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.487845 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9523f6a7-595f-4984-87e7-3b78d9a4222c-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.488045 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-etcd-service-ca\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.488330 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e005805b-ffc8-4dcf-88bc-31611953d870-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.489278 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e005805b-ffc8-4dcf-88bc-31611953d870-images\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.489752 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c372fe91-d8d2-41b0-a85d-327796979bf1-node-bootstrap-token\") pod \"machine-config-server-szsvh\" (UID: \"c372fe91-d8d2-41b0-a85d-327796979bf1\") " pod="openshift-machine-config-operator/machine-config-server-szsvh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.490519 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3fb2725a-267d-41b2-bf92-e18f82bcbeab-config\") pod \"kube-controller-manager-operator-78b949d7b-qqrkc\" (UID: \"3fb2725a-267d-41b2-bf92-e18f82bcbeab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.490847 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe162e8b-b505-4876-ae5a-c2314639fda9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8mps5\" (UID: \"fe162e8b-b505-4876-ae5a-c2314639fda9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.491610 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa7410b8-9dc3-410f-9c3b-c8cac55804c7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nsfm2\" (UID: \"aa7410b8-9dc3-410f-9c3b-c8cac55804c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.492935 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c372fe91-d8d2-41b0-a85d-327796979bf1-certs\") pod \"machine-config-server-szsvh\" (UID: \"c372fe91-d8d2-41b0-a85d-327796979bf1\") " pod="openshift-machine-config-operator/machine-config-server-szsvh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.493404 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53c32bbb-3266-4a0c-8e56-ce2b80becc85-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-77tbv\" (UID: \"53c32bbb-3266-4a0c-8e56-ce2b80becc85\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.493468 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e620889a-27ac-4bc5-99e9-b1033d3f2345-signing-key\") pod \"service-ca-9c57cc56f-n955p\" (UID: \"e620889a-27ac-4bc5-99e9-b1033d3f2345\") " pod="openshift-service-ca/service-ca-9c57cc56f-n955p" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.493663 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-stats-auth\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.494376 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-metrics-certs\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.494447 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-apiservice-cert\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: \"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.495758 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fb2725a-267d-41b2-bf92-e18f82bcbeab-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qqrkc\" (UID: \"3fb2725a-267d-41b2-bf92-e18f82bcbeab\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.495786 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e005805b-ffc8-4dcf-88bc-31611953d870-proxy-tls\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.499293 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-config-volume\") pod \"collect-profiles-29454255-49zxs\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.500075 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8tlg5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.501557 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-service-ca-bundle\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.501587 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e93786ca-d394-46a5-94d8-00c6056cea5a-proxy-tls\") pod 
\"machine-config-controller-84d6567774-zjdzc\" (UID: \"e93786ca-d394-46a5-94d8-00c6056cea5a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.501722 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e620889a-27ac-4bc5-99e9-b1033d3f2345-signing-cabundle\") pod \"service-ca-9c57cc56f-n955p\" (UID: \"e620889a-27ac-4bc5-99e9-b1033d3f2345\") " pod="openshift-service-ca/service-ca-9c57cc56f-n955p" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.502329 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8tlg5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.502828 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-secret-volume\") pod \"collect-profiles-29454255-49zxs\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.503596 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8z7w\" (UniqueName: \"kubernetes.io/projected/9e8f3532-2214-4769-b4f1-2edb51ca7aec-kube-api-access-m8z7w\") pod \"downloads-7954f5f757-fxjs9\" (UID: \"9e8f3532-2214-4769-b4f1-2edb51ca7aec\") " pod="openshift-console/downloads-7954f5f757-fxjs9" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.510641 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/0f1b6231-5477-4761-8cdb-8fb1b61756c5-cert\") pod \"ingress-canary-tb6h6\" (UID: \"0f1b6231-5477-4761-8cdb-8fb1b61756c5\") " pod="openshift-ingress-canary/ingress-canary-tb6h6" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.511110 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-serving-cert\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.511126 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/300dbd86-e6d3-4baf-9cfb-77f0930937a4-profile-collector-cert\") pod \"catalog-operator-68c6474976-5klc7\" (UID: \"300dbd86-e6d3-4baf-9cfb-77f0930937a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.511258 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb-metrics-tls\") pod \"dns-default-sbfbv\" (UID: \"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb\") " pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.513495 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/300dbd86-e6d3-4baf-9cfb-77f0930937a4-srv-cert\") pod \"catalog-operator-68c6474976-5klc7\" (UID: \"300dbd86-e6d3-4baf-9cfb-77f0930937a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.513949 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/cab9f457-c407-4075-bddf-deb1c8e86b45-srv-cert\") pod \"olm-operator-6b444d44fb-7qdzz\" (UID: \"cab9f457-c407-4075-bddf-deb1c8e86b45\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.514338 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-default-certificate\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.515668 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9523f6a7-595f-4984-87e7-3b78d9a4222c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.519014 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhdm2\" (UniqueName: \"kubernetes.io/projected/300dbd86-e6d3-4baf-9cfb-77f0930937a4-kube-api-access-dhdm2\") pod \"catalog-operator-68c6474976-5klc7\" (UID: \"300dbd86-e6d3-4baf-9cfb-77f0930937a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.527067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9523f6a7-595f-4984-87e7-3b78d9a4222c-metrics-tls\") pod \"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.527447 4867 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-etcd-client\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.533843 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.542792 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.550265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw28j\" (UniqueName: \"kubernetes.io/projected/e620889a-27ac-4bc5-99e9-b1033d3f2345-kube-api-access-pw28j\") pod \"service-ca-9c57cc56f-n955p\" (UID: \"e620889a-27ac-4bc5-99e9-b1033d3f2345\") " pod="openshift-service-ca/service-ca-9c57cc56f-n955p" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.572168 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp8tg\" (UniqueName: \"kubernetes.io/projected/15e74714-78ff-4351-9088-ddf6672ce8a5-kube-api-access-fp8tg\") pod \"marketplace-operator-79b997595-8tlg5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.598273 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2dpd\" (UniqueName: \"kubernetes.io/projected/53c32bbb-3266-4a0c-8e56-ce2b80becc85-kube-api-access-m2dpd\") pod \"package-server-manager-789f6589d5-77tbv\" (UID: \"53c32bbb-3266-4a0c-8e56-ce2b80becc85\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" Jan 01 08:28:48 crc 
kubenswrapper[4867]: I0101 08:28:48.603208 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.603654 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:48 crc kubenswrapper[4867]: E0101 08:28:48.604094 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:49.104059566 +0000 UTC m=+138.239328325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.604530 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:48 crc kubenswrapper[4867]: E0101 08:28:48.605064 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:49.105051792 +0000 UTC m=+138.240320561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.609074 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n955p" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.614597 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.628749 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8r89\" (UniqueName: \"kubernetes.io/projected/9523f6a7-595f-4984-87e7-3b78d9a4222c-kube-api-access-m8r89\") pod \"ingress-operator-5b745b69d9-c47kg\" (UID: \"9523f6a7-595f-4984-87e7-3b78d9a4222c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.632296 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj9zj\" (UniqueName: \"kubernetes.io/projected/fe162e8b-b505-4876-ae5a-c2314639fda9-kube-api-access-zj9zj\") pod \"kube-storage-version-migrator-operator-b67b599dd-8mps5\" (UID: \"fe162e8b-b505-4876-ae5a-c2314639fda9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.640874 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.650543 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.653781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ghwk\" (UniqueName: \"kubernetes.io/projected/1ac62bb4-9b43-4266-8325-ecdc8d1c0d39-kube-api-access-7ghwk\") pod \"router-default-5444994796-7v969\" (UID: \"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39\") " pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.674389 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba8ed252-ce14-45fe-8a3e-82d7981d3acb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-67p67\" (UID: \"ba8ed252-ce14-45fe-8a3e-82d7981d3acb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.679674 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ks4bk"] Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.696341 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg"] Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.697467 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcf66\" (UniqueName: \"kubernetes.io/projected/0f1b6231-5477-4761-8cdb-8fb1b61756c5-kube-api-access-mcf66\") pod \"ingress-canary-tb6h6\" (UID: \"0f1b6231-5477-4761-8cdb-8fb1b61756c5\") " pod="openshift-ingress-canary/ingress-canary-tb6h6" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.707247 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:48 crc kubenswrapper[4867]: E0101 08:28:48.707536 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:49.207509699 +0000 UTC m=+138.342778468 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.712656 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tb6h6" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.725567 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvc9n\" (UniqueName: \"kubernetes.io/projected/cab9f457-c407-4075-bddf-deb1c8e86b45-kube-api-access-wvc9n\") pod \"olm-operator-6b444d44fb-7qdzz\" (UID: \"cab9f457-c407-4075-bddf-deb1c8e86b45\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.746568 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p"] Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.769808 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-fxjs9" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.786717 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4klht\" (UniqueName: \"kubernetes.io/projected/7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1-kube-api-access-4klht\") pod \"csi-hostpathplugin-69dxz\" (UID: \"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1\") " pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.786797 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j"] Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.794132 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwvjk\" (UniqueName: \"kubernetes.io/projected/e93786ca-d394-46a5-94d8-00c6056cea5a-kube-api-access-kwvjk\") pod \"machine-config-controller-84d6567774-zjdzc\" (UID: \"e93786ca-d394-46a5-94d8-00c6056cea5a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.798455 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfwfm\" (UniqueName: \"kubernetes.io/projected/c637114e-4a21-4bea-86fc-c15d89d4c72f-kube-api-access-rfwfm\") pod \"service-ca-operator-777779d784-d48qm\" (UID: \"c637114e-4a21-4bea-86fc-c15d89d4c72f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.808711 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" 
Jan 01 08:28:48 crc kubenswrapper[4867]: E0101 08:28:48.809067 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:49.309056433 +0000 UTC m=+138.444325202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.812904 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b2crx"] Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.831863 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvtx6\" (UniqueName: \"kubernetes.io/projected/e005805b-ffc8-4dcf-88bc-31611953d870-kube-api-access-qvtx6\") pod \"machine-config-operator-74547568cd-p7jh5\" (UID: \"e005805b-ffc8-4dcf-88bc-31611953d870\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.832322 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5fw9\" (UniqueName: \"kubernetes.io/projected/aa7410b8-9dc3-410f-9c3b-c8cac55804c7-kube-api-access-d5fw9\") pod \"control-plane-machine-set-operator-78cbb6b69f-nsfm2\" (UID: \"aa7410b8-9dc3-410f-9c3b-c8cac55804c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.846701 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-79vcr\" (UniqueName: \"kubernetes.io/projected/17d1f537-63f7-4ee9-ba1a-8395a56d24a3-kube-api-access-79vcr\") pod \"etcd-operator-b45778765-5gnbj\" (UID: \"17d1f537-63f7-4ee9-ba1a-8395a56d24a3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.851247 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.856231 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2" Jan 01 08:28:48 crc kubenswrapper[4867]: W0101 08:28:48.857177 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87c3b9b3_d6b0_46cb_aed9_c58555214163.slice/crio-d803ead0db2a4c0d94b979713f2de39f2f7e937ff44c2f09c31449eea12540db WatchSource:0}: Error finding container d803ead0db2a4c0d94b979713f2de39f2f7e937ff44c2f09c31449eea12540db: Status 404 returned error can't find the container with id d803ead0db2a4c0d94b979713f2de39f2f7e937ff44c2f09c31449eea12540db Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.865927 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:48 crc kubenswrapper[4867]: W0101 08:28:48.865984 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18d7838a_e01c_42e1_ab20_a72878132ef1.slice/crio-c84863e923ff2807bea7b6cd1328815811940d50a9374c9eee61f29537a88998 WatchSource:0}: Error finding container c84863e923ff2807bea7b6cd1328815811940d50a9374c9eee61f29537a88998: Status 404 returned error can't find the container with id c84863e923ff2807bea7b6cd1328815811940d50a9374c9eee61f29537a88998 Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.870725 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.880564 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.887644 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t67cg"] Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.888419 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.897478 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fb2725a-267d-41b2-bf92-e18f82bcbeab-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qqrkc\" (UID: \"3fb2725a-267d-41b2-bf92-e18f82bcbeab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.910140 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:48 crc kubenswrapper[4867]: E0101 08:28:48.910484 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:49.410464692 +0000 UTC m=+138.545733461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.911127 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5v45\" (UniqueName: \"kubernetes.io/projected/c372fe91-d8d2-41b0-a85d-327796979bf1-kube-api-access-r5v45\") pod \"machine-config-server-szsvh\" (UID: \"c372fe91-d8d2-41b0-a85d-327796979bf1\") " pod="openshift-machine-config-operator/machine-config-server-szsvh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.929803 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.941497 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgz5x\" (UniqueName: \"kubernetes.io/projected/8e196803-de1e-4b07-acf8-489c31eb0bf2-kube-api-access-vgz5x\") pod \"migrator-59844c95c7-7qgnh\" (UID: \"8e196803-de1e-4b07-acf8-489c31eb0bf2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.941626 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tljtw\" (UniqueName: \"kubernetes.io/projected/53c28c3c-1ead-4eae-b73b-3c8d2d0475ab-kube-api-access-tljtw\") pod \"packageserver-d55dfcdfc-ztzb7\" (UID: \"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 
08:28:48.950553 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg47j\" (UniqueName: \"kubernetes.io/projected/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-kube-api-access-tg47j\") pod \"collect-profiles-29454255-49zxs\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.955335 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.960540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" event={"ID":"94b7004c-c318-4872-a1b7-f983c691a523","Type":"ContainerStarted","Data":"499703cbbfa5000c31f2fb052e35d3f11dfc1067c7ea5f9ddd8f5ebd2f988f9e"} Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.962590 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" event={"ID":"1504048c-578a-42d2-a6de-9161ee1ebb82","Type":"ContainerStarted","Data":"a863a1311035176817600465e99da7f3627f18f258eb02dc13e2c9b9b03ec28a"} Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.963407 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6lsq2" event={"ID":"25d57f2f-1353-417b-ba47-a0ceb1a4577e","Type":"ContainerStarted","Data":"f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2"} Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.963424 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6lsq2" event={"ID":"25d57f2f-1353-417b-ba47-a0ceb1a4577e","Type":"ContainerStarted","Data":"78c1070c6faeeee7298b51fd78b1b7dd587dfc15453ad2223bc05b674894e2a6"} Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.969275 4867 generic.go:334] 
"Generic (PLEG): container finished" podID="ba5c2565-7b3c-4c9b-8600-6a572cc363e0" containerID="15ca0e06f126db49130150585ff467bf5590fda11e149a5076f97a90e1cb8d56" exitCode=0 Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.970140 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" event={"ID":"ba5c2565-7b3c-4c9b-8600-6a572cc363e0","Type":"ContainerDied","Data":"15ca0e06f126db49130150585ff467bf5590fda11e149a5076f97a90e1cb8d56"} Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.971563 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.973613 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" event={"ID":"18d7838a-e01c-42e1-ab20-a72878132ef1","Type":"ContainerStarted","Data":"c84863e923ff2807bea7b6cd1328815811940d50a9374c9eee61f29537a88998"} Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.977807 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-szsvh" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.980361 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f8jr\" (UniqueName: \"kubernetes.io/projected/9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb-kube-api-access-7f8jr\") pod \"dns-default-sbfbv\" (UID: \"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb\") " pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:48 crc kubenswrapper[4867]: I0101 08:28:48.998476 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7"] Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.009368 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.013493 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-69dxz" Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.014947 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.015319 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:49.515307675 +0000 UTC m=+138.650576444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.015516 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" event={"ID":"f92ddf87-e976-4f3c-9a8c-8a4dab665391","Type":"ContainerStarted","Data":"4da09da4c089bd5aa112ab52d18842c00ef21f33a35751e1f3bc7e76f81af79e"} Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.022117 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ks4bk" event={"ID":"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3","Type":"ContainerStarted","Data":"1fc22a4c0d7981adc64dc14a67c54e1a2689867e469dc1877317e3e1c88b3f3f"} Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.037156 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" event={"ID":"87c3b9b3-d6b0-46cb-aed9-c58555214163","Type":"ContainerStarted","Data":"d803ead0db2a4c0d94b979713f2de39f2f7e937ff44c2f09c31449eea12540db"} Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.037486 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.052766 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw"] Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.061153 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.080470 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.098699 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nxcwg"] Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.120325 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.121351 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:49.621337941 +0000 UTC m=+138.756606710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.182265 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.220446 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.222529 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.224038 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:49.723762977 +0000 UTC m=+138.859031746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.265138 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.323537 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.324144 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:49.824128268 +0000 UTC m=+138.959397037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.355000 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7"] Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.426302 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 
08:28:49.426936 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:49.926917607 +0000 UTC m=+139.062186426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.527186 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.527352 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.02732479 +0000 UTC m=+139.162593559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.527456 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.527769 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.027762576 +0000 UTC m=+139.163031345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.629108 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.629342 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.129320671 +0000 UTC m=+139.264589440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.629426 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.629810 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.129799858 +0000 UTC m=+139.265068637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.737700 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.738276 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.238239161 +0000 UTC m=+139.373507930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.742692 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.743263 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.243248882 +0000 UTC m=+139.378517651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.836809 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8tlg5"] Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.846450 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.848332 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.348309483 +0000 UTC m=+139.483578252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:49 crc kubenswrapper[4867]: I0101 08:28:49.949286 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:49 crc kubenswrapper[4867]: E0101 08:28:49.949703 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.449688191 +0000 UTC m=+139.584956960 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.051705 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:50 crc kubenswrapper[4867]: E0101 08:28:50.051906 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.551864797 +0000 UTC m=+139.687133566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.052205 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:50 crc kubenswrapper[4867]: E0101 08:28:50.052480 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.552469179 +0000 UTC m=+139.687737948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.118502 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hmhl9" podStartSLOduration=120.118478921 podStartE2EDuration="2m0.118478921s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.116009912 +0000 UTC m=+139.251278701" watchObservedRunningTime="2026-01-01 08:28:50.118478921 +0000 UTC m=+139.253747690" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.146596 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" event={"ID":"974e660d-a7f0-4c6b-922a-adcccb236a54","Type":"ContainerStarted","Data":"42c41005bec921c7c3abaa06e89f7a923d11f1183009979f7ebb36f0e327b35f"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.146645 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" event={"ID":"974e660d-a7f0-4c6b-922a-adcccb236a54","Type":"ContainerStarted","Data":"7ecb408195a329e1293d4a8f3d0216d556ab0561967bc7d2eb8f698224f2b793"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.171375 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:50 crc kubenswrapper[4867]: E0101 08:28:50.171906 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.671872768 +0000 UTC m=+139.807141537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.177603 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n955p"] Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.177662 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg"] Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.183125 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fxjs9"] Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.184064 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" event={"ID":"42da4c7a-d738-4262-a395-1ff1c9d4f399","Type":"ContainerStarted","Data":"04cf8f26ef1f6d67218f8086ba479977f5118b0bb449aae3145c98d54c78aabf"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.185462 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pttmf" podStartSLOduration=121.185444557 podStartE2EDuration="2m1.185444557s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.183499737 +0000 UTC m=+139.318768506" watchObservedRunningTime="2026-01-01 08:28:50.185444557 +0000 UTC m=+139.320713326" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.191162 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" event={"ID":"300dbd86-e6d3-4baf-9cfb-77f0930937a4","Type":"ContainerStarted","Data":"34c88261c184ffada53bf7dc28a4fcdec259b5e9ef3d840cb0ba3d08eff1f57c"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.204005 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ks4bk" event={"ID":"655f7be8-97ed-4ff8-a91a-3f68d6e8cbf3","Type":"ContainerStarted","Data":"0a8d90795b8e86e2d3fe4be2d346f8a0ab49810d51e182af9dd98056979b9dbd"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.204939 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.208705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" event={"ID":"87c3b9b3-d6b0-46cb-aed9-c58555214163","Type":"ContainerStarted","Data":"7a40d348ef2e41c7def5eb8901c7dce7af7d445824ca76ccd50b4e57e1e3e6c8"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.210103 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" 
event={"ID":"69dbb713-1149-4edd-899c-3fb77a8a36e2","Type":"ContainerStarted","Data":"fc75a8075bcb9186d34d8303035199db2d8359c9c599d2df4dccd15c2b0155e0"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.210676 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" event={"ID":"7063db7c-d5df-471f-ad8a-48a99bfd8e19","Type":"ContainerStarted","Data":"372dc00a8053e2d98995cb86b5f467f32d81b00235f20ac3bd8b3498e5e547f2"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.212183 4867 generic.go:334] "Generic (PLEG): container finished" podID="18d7838a-e01c-42e1-ab20-a72878132ef1" containerID="20def09a8557301a70254bc626acd85cc55921a749a42f8c2e054d011c02786f" exitCode=0 Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.212219 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" event={"ID":"18d7838a-e01c-42e1-ab20-a72878132ef1","Type":"ContainerDied","Data":"20def09a8557301a70254bc626acd85cc55921a749a42f8c2e054d011c02786f"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.214195 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" event={"ID":"15e74714-78ff-4351-9088-ddf6672ce8a5","Type":"ContainerStarted","Data":"9c30c68d2f91ede95d3251d8395311039ade25d031aabe34d61774cee20869bb"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.229966 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" event={"ID":"1504048c-578a-42d2-a6de-9161ee1ebb82","Type":"ContainerStarted","Data":"77b04a1fda09a6324eee45d1a8d7c66c3c4c59dfa23861f97f6983db463bb304"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.249251 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" 
event={"ID":"01fa587e-a8a9-4092-9462-905cf90cf1dc","Type":"ContainerStarted","Data":"3a6b26ab85a7e54ca89c5b80728d57730e473677fd7c9fbaec609a08bbb68bb7"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.265862 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-szsvh" event={"ID":"c372fe91-d8d2-41b0-a85d-327796979bf1","Type":"ContainerStarted","Data":"7bd3c9052ba09f46a9aa818e8c22e24dc20a1ceed8c15ab77748d0f7f7402008"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.265919 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-szsvh" event={"ID":"c372fe91-d8d2-41b0-a85d-327796979bf1","Type":"ContainerStarted","Data":"bdba6309a1426017648a7b190a6a30e6be0188956b0157950c25663b1a44c08f"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.268231 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7v969" event={"ID":"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39","Type":"ContainerStarted","Data":"fec9cceb83d2bcb49a712462b1c0779261bf7a10d72ffa2702692505c168cad5"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.268283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7v969" event={"ID":"1ac62bb4-9b43-4266-8325-ecdc8d1c0d39","Type":"ContainerStarted","Data":"69bf2fee0d5ea328a15ff68bf8d1ab5524a44f677ac93609b7aa4a7e3b924c60"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.272561 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:50 crc kubenswrapper[4867]: E0101 08:28:50.273910 4867 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.773897089 +0000 UTC m=+139.909165858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.274485 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" event={"ID":"851af35f-1738-49d8-855c-33e09731c8e3","Type":"ContainerStarted","Data":"14e67d8dab7da9805c596b3fc41dc199c2ee8f5d8f26177d01b490848bed4ab4"} Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.285483 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5"] Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.362802 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" podStartSLOduration=121.362784436 podStartE2EDuration="2m1.362784436s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.352545417 +0000 UTC m=+139.487814196" watchObservedRunningTime="2026-01-01 08:28:50.362784436 +0000 UTC m=+139.498053205" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.376345 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:50 crc kubenswrapper[4867]: E0101 08:28:50.376905 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.876866775 +0000 UTC m=+140.012135544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.478927 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:50 crc kubenswrapper[4867]: E0101 08:28:50.482262 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:50.982245927 +0000 UTC m=+140.117514696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.562983 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" podStartSLOduration=120.562959139 podStartE2EDuration="2m0.562959139s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.501620046 +0000 UTC m=+139.636888815" watchObservedRunningTime="2026-01-01 08:28:50.562959139 +0000 UTC m=+139.698227908" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.581022 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:50 crc kubenswrapper[4867]: E0101 08:28:50.581540 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:51.081524539 +0000 UTC m=+140.216793308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.682272 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:50 crc kubenswrapper[4867]: E0101 08:28:50.682608 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:51.182596116 +0000 UTC m=+140.317864885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.721966 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6lsq2" podStartSLOduration=121.721952835 podStartE2EDuration="2m1.721952835s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.683022992 +0000 UTC m=+139.818291761" watchObservedRunningTime="2026-01-01 08:28:50.721952835 +0000 UTC m=+139.857221604" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.742532 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" podStartSLOduration=121.742514497 podStartE2EDuration="2m1.742514497s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.740546696 +0000 UTC m=+139.875815455" watchObservedRunningTime="2026-01-01 08:28:50.742514497 +0000 UTC m=+139.877783266" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.759980 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngh5s" podStartSLOduration=121.759964387 podStartE2EDuration="2m1.759964387s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.7597669 +0000 UTC m=+139.895035669" watchObservedRunningTime="2026-01-01 08:28:50.759964387 +0000 UTC m=+139.895233156" Jan 01 08:28:50 crc kubenswrapper[4867]: W0101 08:28:50.777404 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode620889a_27ac_4bc5_99e9_b1033d3f2345.slice/crio-e0751df46559dca92d736a8d5b8629910cdaaade50c9d18232af6ce9f4a6cb13 WatchSource:0}: Error finding container e0751df46559dca92d736a8d5b8629910cdaaade50c9d18232af6ce9f4a6cb13: Status 404 returned error can't find the container with id e0751df46559dca92d736a8d5b8629910cdaaade50c9d18232af6ce9f4a6cb13 Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.784319 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:50 crc kubenswrapper[4867]: E0101 08:28:50.785170 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:51.285155266 +0000 UTC m=+140.420424035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.860176 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-szsvh" podStartSLOduration=5.860156182 podStartE2EDuration="5.860156182s" podCreationTimestamp="2026-01-01 08:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.851587093 +0000 UTC m=+139.986855882" watchObservedRunningTime="2026-01-01 08:28:50.860156182 +0000 UTC m=+139.995424941" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.868452 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-7v969" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.886762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:50 crc kubenswrapper[4867]: E0101 08:28:50.887160 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-01 08:28:51.387148616 +0000 UTC m=+140.522417385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.918171 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" podStartSLOduration=121.918152765 podStartE2EDuration="2m1.918152765s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.893128092 +0000 UTC m=+140.028396871" watchObservedRunningTime="2026-01-01 08:28:50.918152765 +0000 UTC m=+140.053421534" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.927285 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ks4bk" podStartSLOduration=121.927266094 podStartE2EDuration="2m1.927266094s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.91440044 +0000 UTC m=+140.049669219" watchObservedRunningTime="2026-01-01 08:28:50.927266094 +0000 UTC m=+140.062534863" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.970000 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fbt8p" podStartSLOduration=121.969980685 podStartE2EDuration="2m1.969980685s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.966697827 +0000 UTC m=+140.101966606" watchObservedRunningTime="2026-01-01 08:28:50.969980685 +0000 UTC m=+140.105249454" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.971571 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-7v969" podStartSLOduration=121.971562822 podStartE2EDuration="2m1.971562822s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:50.950337496 +0000 UTC m=+140.085606285" watchObservedRunningTime="2026-01-01 08:28:50.971562822 +0000 UTC m=+140.106831591" Jan 01 08:28:50 crc kubenswrapper[4867]: I0101 08:28:50.994593 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:50 crc kubenswrapper[4867]: E0101 08:28:50.994998 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:51.494976377 +0000 UTC m=+140.630245146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.014322 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" podStartSLOduration=122.014305304 podStartE2EDuration="2m2.014305304s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:51.009327515 +0000 UTC m=+140.144596304" watchObservedRunningTime="2026-01-01 08:28:51.014305304 +0000 UTC m=+140.149574073" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.064169 4867 patch_prober.go:28] interesting pod/router-default-5444994796-7v969 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 01 08:28:51 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 01 08:28:51 crc kubenswrapper[4867]: [+]process-running ok Jan 01 08:28:51 crc kubenswrapper[4867]: healthz check failed Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.064457 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7v969" podUID="1ac62bb4-9b43-4266-8325-ecdc8d1c0d39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.071596 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" podStartSLOduration=121.071573051 podStartE2EDuration="2m1.071573051s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:51.06351317 +0000 UTC m=+140.198781939" watchObservedRunningTime="2026-01-01 08:28:51.071573051 +0000 UTC m=+140.206841820" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.100105 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:51 crc kubenswrapper[4867]: E0101 08:28:51.114798 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:51.614489499 +0000 UTC m=+140.749758268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.146300 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ks4bk" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.209982 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:51 crc kubenswrapper[4867]: E0101 08:28:51.210539 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:51.710523835 +0000 UTC m=+140.845792604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.249995 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv"] Jan 01 08:28:51 crc kubenswrapper[4867]: W0101 08:28:51.312758 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53c32bbb_3266_4a0c_8e56_ce2b80becc85.slice/crio-ae848e2837564942633187de7e8a2b330b52805508b86bac3075da654745afd9 WatchSource:0}: Error finding container ae848e2837564942633187de7e8a2b330b52805508b86bac3075da654745afd9: Status 404 returned error can't find the container with id ae848e2837564942633187de7e8a2b330b52805508b86bac3075da654745afd9 Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.312932 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:51 crc kubenswrapper[4867]: E0101 08:28:51.313188 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-01 08:28:51.813178989 +0000 UTC m=+140.948447758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.324385 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" event={"ID":"300dbd86-e6d3-4baf-9cfb-77f0930937a4","Type":"ContainerStarted","Data":"22b51f86e59844a86bd4931c73613beefca62182e96f014deda95cbe8dca6826"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.324427 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.331251 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" event={"ID":"9523f6a7-595f-4984-87e7-3b78d9a4222c","Type":"ContainerStarted","Data":"f214285c393d397e510c5aa133add39b1ffe34cb2b9db38a1cc80a817043c10c"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.332639 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.332668 4867 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.332795 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5klc7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.332842 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" podUID="300dbd86-e6d3-4baf-9cfb-77f0930937a4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.344643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" event={"ID":"18d7838a-e01c-42e1-ab20-a72878132ef1","Type":"ContainerStarted","Data":"7072fd6a1f77f8d6dae9e4d98615a4bc8f420983a57ef3e60a9cbb2d2365c39d"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.344688 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.361915 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" event={"ID":"15e74714-78ff-4351-9088-ddf6672ce8a5","Type":"ContainerStarted","Data":"c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.362372 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.363938 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fxjs9" event={"ID":"9e8f3532-2214-4769-b4f1-2edb51ca7aec","Type":"ContainerStarted","Data":"0d7af2e4a3cd6ea6c898cf1434995846d1fd02c0c5395b315d30dbd354e5f1eb"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.371024 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8tlg5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.371078 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" podUID="15e74714-78ff-4351-9088-ddf6672ce8a5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.378471 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" event={"ID":"fe162e8b-b505-4876-ae5a-c2314639fda9","Type":"ContainerStarted","Data":"315b432f97cd5eb03f73fcac7c214d32c53416fdb10442fd6fa5a5b4f6ab2223"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.414819 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:51 crc 
kubenswrapper[4867]: E0101 08:28:51.415857 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:51.915828223 +0000 UTC m=+141.051096992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.423749 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" event={"ID":"01fa587e-a8a9-4092-9462-905cf90cf1dc","Type":"ContainerStarted","Data":"aff1bf3f81e37056144d26a9ae516adc301b9767d9ec5c45ae2d7b9fc07cd70d"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.423788 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" event={"ID":"01fa587e-a8a9-4092-9462-905cf90cf1dc","Type":"ContainerStarted","Data":"ecf88c098ef88ab0f5fcff8040bd81f5f892e8cab548883de8c9f30ac87a27a7"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.435005 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tb6h6"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.436083 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" podStartSLOduration=121.436066023 podStartE2EDuration="2m1.436066023s" 
podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:51.431342803 +0000 UTC m=+140.566611572" watchObservedRunningTime="2026-01-01 08:28:51.436066023 +0000 UTC m=+140.571334792" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.458221 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" event={"ID":"7063db7c-d5df-471f-ad8a-48a99bfd8e19","Type":"ContainerStarted","Data":"6eaef08bb02c518284c6aa5b59bdd1b53a439b970399993d9af971aed109e983"} Jan 01 08:28:51 crc kubenswrapper[4867]: W0101 08:28:51.468708 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f1b6231_5477_4761_8cdb_8fb1b61756c5.slice/crio-aca921187832b8a4cf96bcd5f90dc90a3a2dea8974bd8d8228ed1047ac65340d WatchSource:0}: Error finding container aca921187832b8a4cf96bcd5f90dc90a3a2dea8974bd8d8228ed1047ac65340d: Status 404 returned error can't find the container with id aca921187832b8a4cf96bcd5f90dc90a3a2dea8974bd8d8228ed1047ac65340d Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.523046 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" event={"ID":"851af35f-1738-49d8-855c-33e09731c8e3","Type":"ContainerStarted","Data":"6cb39e0bfaba103e68061209a25d984027a81f553c2cf93c9672563c575bdea0"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.523083 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" event={"ID":"851af35f-1738-49d8-855c-33e09731c8e3","Type":"ContainerStarted","Data":"2f02e82bfe6b2daff2fddc7d0b17c5bb0ecdc640afbab4a991389a2a2763512c"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.524381 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:51 crc kubenswrapper[4867]: E0101 08:28:51.525328 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:52.025313203 +0000 UTC m=+141.160581972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.567536 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" event={"ID":"69dbb713-1149-4edd-899c-3fb77a8a36e2","Type":"ContainerStarted","Data":"7ff4274526d648b68c64f52adf709e8cff8cc389d1182b049eb484b896fdf785"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.567578 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" event={"ID":"69dbb713-1149-4edd-899c-3fb77a8a36e2","Type":"ContainerStarted","Data":"e1d3915687c4aa0df571e5cf26c23a290c91ed369634164f88df912211db6d51"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.583705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8k4x7" event={"ID":"42da4c7a-d738-4262-a395-1ff1c9d4f399","Type":"ContainerStarted","Data":"2f764acce5b566eb072f44fd7c940fd7cdc11d068ae32b3a8290c868eed266f1"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.588999 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n955p" event={"ID":"e620889a-27ac-4bc5-99e9-b1033d3f2345","Type":"ContainerStarted","Data":"e0751df46559dca92d736a8d5b8629910cdaaade50c9d18232af6ce9f4a6cb13"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.589099 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" podStartSLOduration=121.589080504 podStartE2EDuration="2m1.589080504s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:51.567423973 +0000 UTC m=+140.702692752" watchObservedRunningTime="2026-01-01 08:28:51.589080504 +0000 UTC m=+140.724349263" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.590386 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" podStartSLOduration=122.590381601 podStartE2EDuration="2m2.590381601s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:51.587599811 +0000 UTC m=+140.722868600" watchObservedRunningTime="2026-01-01 08:28:51.590381601 +0000 UTC m=+140.725650370" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.594605 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" 
event={"ID":"ba5c2565-7b3c-4c9b-8600-6a572cc363e0","Type":"ContainerStarted","Data":"592b1d97b984baec141ba5d69aaaf50297a8574c6cc9fbc497e45894a40a3e89"} Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.627406 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:51 crc kubenswrapper[4867]: E0101 08:28:51.628428 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:52.128410383 +0000 UTC m=+141.263679152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.732413 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:51 crc kubenswrapper[4867]: E0101 08:28:51.733015 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:52.233003557 +0000 UTC m=+141.368272326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.739985 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.740523 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.784743 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.787524 4867 patch_prober.go:28] interesting pod/apiserver-76f77b778f-bsfrw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 01 08:28:51 crc kubenswrapper[4867]: [+]log ok Jan 01 08:28:51 crc kubenswrapper[4867]: [+]etcd ok Jan 01 08:28:51 crc kubenswrapper[4867]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 01 08:28:51 crc kubenswrapper[4867]: [+]poststarthook/generic-apiserver-start-informers ok Jan 01 08:28:51 crc kubenswrapper[4867]: [+]poststarthook/max-in-flight-filter ok Jan 01 08:28:51 crc 
kubenswrapper[4867]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 01 08:28:51 crc kubenswrapper[4867]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 01 08:28:51 crc kubenswrapper[4867]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 01 08:28:51 crc kubenswrapper[4867]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 01 08:28:51 crc kubenswrapper[4867]: [+]poststarthook/project.openshift.io-projectcache ok Jan 01 08:28:51 crc kubenswrapper[4867]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 01 08:28:51 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-startinformers ok Jan 01 08:28:51 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 01 08:28:51 crc kubenswrapper[4867]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 01 08:28:51 crc kubenswrapper[4867]: livez check failed Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.787587 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" podUID="1504048c-578a-42d2-a6de-9161ee1ebb82" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.817360 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.827460 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.834340 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:51 crc kubenswrapper[4867]: E0101 08:28:51.834703 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:52.334688307 +0000 UTC m=+141.469957066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.839836 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.841804 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-b2crx" podStartSLOduration=121.841790983 podStartE2EDuration="2m1.841790983s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:51.814112934 +0000 UTC m=+140.949381703" watchObservedRunningTime="2026-01-01 08:28:51.841790983 +0000 UTC m=+140.977059752" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.853696 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.860739 
4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d48qm"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.870607 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.870642 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.875308 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" podStartSLOduration=122.875287872 podStartE2EDuration="2m2.875287872s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:51.868776967 +0000 UTC m=+141.004045736" watchObservedRunningTime="2026-01-01 08:28:51.875287872 +0000 UTC m=+141.010556641" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.875684 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.879613 4867 patch_prober.go:28] interesting pod/router-default-5444994796-7v969 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 01 08:28:51 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 01 08:28:51 crc kubenswrapper[4867]: [+]process-running ok Jan 01 08:28:51 crc kubenswrapper[4867]: healthz check failed Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.879670 4867 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-7v969" podUID="1ac62bb4-9b43-4266-8325-ecdc8d1c0d39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.890978 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5gnbj"] Jan 01 08:28:51 crc kubenswrapper[4867]: W0101 08:28:51.896636 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba8ed252_ce14_45fe_8a3e_82d7981d3acb.slice/crio-86e2c843f27fec51a4b1e897c60cae89109c38640bc1db911105b5a932f358dc WatchSource:0}: Error finding container 86e2c843f27fec51a4b1e897c60cae89109c38640bc1db911105b5a932f358dc: Status 404 returned error can't find the container with id 86e2c843f27fec51a4b1e897c60cae89109c38640bc1db911105b5a932f358dc Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.901456 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz"] Jan 01 08:28:51 crc kubenswrapper[4867]: W0101 08:28:51.907834 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17d1f537_63f7_4ee9_ba1a_8395a56d24a3.slice/crio-92a0362a7004aeba95eb800111e1b3c9f06175a9d11cbfb791ad41252b4dafb7 WatchSource:0}: Error finding container 92a0362a7004aeba95eb800111e1b3c9f06175a9d11cbfb791ad41252b4dafb7: Status 404 returned error can't find the container with id 92a0362a7004aeba95eb800111e1b3c9f06175a9d11cbfb791ad41252b4dafb7 Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.913508 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.923113 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns-operator/dns-operator-744455d44c-t67cg" podStartSLOduration=122.923093257 podStartE2EDuration="2m2.923093257s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:51.890840953 +0000 UTC m=+141.026109732" watchObservedRunningTime="2026-01-01 08:28:51.923093257 +0000 UTC m=+141.058362026" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.929631 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.933332 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sbfbv"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.935611 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:51 crc kubenswrapper[4867]: E0101 08:28:51.935954 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:52.43594219 +0000 UTC m=+141.571210959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:51 crc kubenswrapper[4867]: W0101 08:28:51.941607 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d2ba3c1_e279_4376_a7fb_8bd238dbd9bb.slice/crio-c54fd3b736fc90f95c08f161bbd1bc0cb886cfbe8ad14e1d882180579cc23740 WatchSource:0}: Error finding container c54fd3b736fc90f95c08f161bbd1bc0cb886cfbe8ad14e1d882180579cc23740: Status 404 returned error can't find the container with id c54fd3b736fc90f95c08f161bbd1bc0cb886cfbe8ad14e1d882180579cc23740 Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.959483 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t99qw" podStartSLOduration=122.959455919 podStartE2EDuration="2m2.959455919s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:51.907234254 +0000 UTC m=+141.042503023" watchObservedRunningTime="2026-01-01 08:28:51.959455919 +0000 UTC m=+141.094724678" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.975239 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-nxcwg" podStartSLOduration=122.975220847 podStartE2EDuration="2m2.975220847s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:51.942002799 +0000 UTC m=+141.077271568" watchObservedRunningTime="2026-01-01 08:28:51.975220847 +0000 UTC m=+141.110489616" Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.976994 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69dxz"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.983662 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs"] Jan 01 08:28:51 crc kubenswrapper[4867]: I0101 08:28:51.985901 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n955p" podStartSLOduration=121.985871122 podStartE2EDuration="2m1.985871122s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:51.985244209 +0000 UTC m=+141.120512978" watchObservedRunningTime="2026-01-01 08:28:51.985871122 +0000 UTC m=+141.121139891" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.037922 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:52 crc kubenswrapper[4867]: E0101 08:28:52.038118 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:52.538093156 +0000 UTC m=+141.673361925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.038205 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:52 crc kubenswrapper[4867]: E0101 08:28:52.038507 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:52.538500031 +0000 UTC m=+141.673768790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.140361 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:52 crc kubenswrapper[4867]: E0101 08:28:52.140767 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:52.64075085 +0000 UTC m=+141.776019619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.245630 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:52 crc kubenswrapper[4867]: E0101 08:28:52.246103 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:52.746086781 +0000 UTC m=+141.881355540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.346831 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:52 crc kubenswrapper[4867]: E0101 08:28:52.347240 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:52.847225461 +0000 UTC m=+141.982494220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.452173 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:52 crc kubenswrapper[4867]: E0101 08:28:52.453281 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:52.953267417 +0000 UTC m=+142.088536186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.568336 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:52 crc kubenswrapper[4867]: E0101 08:28:52.568598 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.068583428 +0000 UTC m=+142.203852197 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.621268 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" event={"ID":"17d1f537-63f7-4ee9-ba1a-8395a56d24a3","Type":"ContainerStarted","Data":"92a0362a7004aeba95eb800111e1b3c9f06175a9d11cbfb791ad41252b4dafb7"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.630206 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tb6h6" event={"ID":"0f1b6231-5477-4761-8cdb-8fb1b61756c5","Type":"ContainerStarted","Data":"a1b0e1ae27432143cd06f5b2ee1ba6ab86e7de0f4868ec86f5e80c69f472e49c"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.630266 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tb6h6" event={"ID":"0f1b6231-5477-4761-8cdb-8fb1b61756c5","Type":"ContainerStarted","Data":"aca921187832b8a4cf96bcd5f90dc90a3a2dea8974bd8d8228ed1047ac65340d"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.632514 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" event={"ID":"e005805b-ffc8-4dcf-88bc-31611953d870","Type":"ContainerStarted","Data":"0061b1c769292856b391beaa0160e2eca5304f5c0ac97b03b9f0a8fd7c149c3b"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.632551 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" 
event={"ID":"e005805b-ffc8-4dcf-88bc-31611953d870","Type":"ContainerStarted","Data":"3ad596150399e756000965c8c2dad3670c80b5f80c9ee8a3ea9d9532c6e0260d"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.645965 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" event={"ID":"ba8ed252-ce14-45fe-8a3e-82d7981d3acb","Type":"ContainerStarted","Data":"86e2c843f27fec51a4b1e897c60cae89109c38640bc1db911105b5a932f358dc"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.649804 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tb6h6" podStartSLOduration=6.649787718 podStartE2EDuration="6.649787718s" podCreationTimestamp="2026-01-01 08:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:52.64901391 +0000 UTC m=+141.784282699" watchObservedRunningTime="2026-01-01 08:28:52.649787718 +0000 UTC m=+141.785056487" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.663840 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" event={"ID":"ed3ea167-3dde-4d3d-b36b-277e5368f1c9","Type":"ContainerStarted","Data":"114e92d1d20332eb1c35e284e0ef4f5ece705d2ed154a6f2259669ec173e3dc1"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.673626 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.678976 4867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" event={"ID":"3fb2725a-267d-41b2-bf92-e18f82bcbeab","Type":"ContainerStarted","Data":"4dc1bd53d103bfa1c176c38f78311833da24f589e1cf238b3464c9c4ea4e19e0"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.679020 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" event={"ID":"3fb2725a-267d-41b2-bf92-e18f82bcbeab","Type":"ContainerStarted","Data":"ad77415fae2c0d44a93c8a888d981b0d346713ede74a7501707a408bec157a04"} Jan 01 08:28:52 crc kubenswrapper[4867]: E0101 08:28:52.679114 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.179094906 +0000 UTC m=+142.314363765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.705872 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qqrkc" podStartSLOduration=122.705845651 podStartE2EDuration="2m2.705845651s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:52.705399935 +0000 UTC m=+141.840668704" 
watchObservedRunningTime="2026-01-01 08:28:52.705845651 +0000 UTC m=+141.841114420" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.706486 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" event={"ID":"53c32bbb-3266-4a0c-8e56-ce2b80becc85","Type":"ContainerStarted","Data":"08c19f83a1d4e2cd87e5897fb0f0bf144423ea64094aef0fc421d1abda912397"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.706527 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" event={"ID":"53c32bbb-3266-4a0c-8e56-ce2b80becc85","Type":"ContainerStarted","Data":"1beeb46c774f5addba0526131c7d5915502c5c31b6dae5c03ba54b85db6e27d4"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.706537 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" event={"ID":"53c32bbb-3266-4a0c-8e56-ce2b80becc85","Type":"ContainerStarted","Data":"ae848e2837564942633187de7e8a2b330b52805508b86bac3075da654745afd9"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.707149 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.728547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh" event={"ID":"8e196803-de1e-4b07-acf8-489c31eb0bf2","Type":"ContainerStarted","Data":"f12b3cf8339b0e9f451028fb97fe063c6c943a66d9be814a9b30de5ce44faf8b"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.728601 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh" 
event={"ID":"8e196803-de1e-4b07-acf8-489c31eb0bf2","Type":"ContainerStarted","Data":"49a3dfc9d464167d338b0b602537ca5ab53e98243807eb8d7e452d4d065f7dbd"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.742680 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" podStartSLOduration=122.742659369 podStartE2EDuration="2m2.742659369s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:52.739308628 +0000 UTC m=+141.874577397" watchObservedRunningTime="2026-01-01 08:28:52.742659369 +0000 UTC m=+141.877928138" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.748259 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" event={"ID":"e93786ca-d394-46a5-94d8-00c6056cea5a","Type":"ContainerStarted","Data":"8aad8a0c0a137b026a183a6471888629e05def66aca43ad6e7888475746717e4"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.748312 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" event={"ID":"e93786ca-d394-46a5-94d8-00c6056cea5a","Type":"ContainerStarted","Data":"b3e279839202b831f1893ac764972f547f51f0bee41f8e25bb802be628bdb87b"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.751236 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" event={"ID":"c637114e-4a21-4bea-86fc-c15d89d4c72f","Type":"ContainerStarted","Data":"04abf66e18d7b1968e4b95131e0ebbdf5d32e14d704367a5dd56ceb0a2bf8468"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.751289 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" event={"ID":"c637114e-4a21-4bea-86fc-c15d89d4c72f","Type":"ContainerStarted","Data":"1da6f221df2cb4ffccc1ab49915b6d7e15c76cc11de404a3559ebfe3fb898c05"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.753758 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69dxz" event={"ID":"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1","Type":"ContainerStarted","Data":"3c361b4f64bdf6a07878fe2f0d0021588b119642e01f18f3d3170d598a7657b3"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.764208 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" event={"ID":"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab","Type":"ContainerStarted","Data":"19efbef05dbe1bebadf91152c5847f8404eba2c76696e601e05d69b5268d2347"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.764252 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" event={"ID":"53c28c3c-1ead-4eae-b73b-3c8d2d0475ab","Type":"ContainerStarted","Data":"9ec7115a5f9cdeaa34e08edb6255dd19c87ac42dc66a52806dd63d99c5c0b031"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.765086 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.774682 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:52 crc kubenswrapper[4867]: E0101 08:28:52.775186 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.275163142 +0000 UTC m=+142.410431911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.776557 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" event={"ID":"9523f6a7-595f-4984-87e7-3b78d9a4222c","Type":"ContainerStarted","Data":"dbf900608b2462339b4c28f1737a3e933b2381376b2394fdc4642a06e04c92fb"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.776593 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" event={"ID":"9523f6a7-595f-4984-87e7-3b78d9a4222c","Type":"ContainerStarted","Data":"8540aa1bece73f6aefd713946768ce2aff1a549cdb8a8c19fd7fca57c03613e8"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.778189 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ztzb7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.778220 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" 
podUID="53c28c3c-1ead-4eae-b73b-3c8d2d0475ab" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.783650 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" event={"ID":"fe162e8b-b505-4876-ae5a-c2314639fda9","Type":"ContainerStarted","Data":"d3e32e97503dfe90b63ef1893b5fb43dd8d79cb62e5cfafb609feae42d6af500"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.798595 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" podStartSLOduration=122.798579357 podStartE2EDuration="2m2.798579357s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:52.776253001 +0000 UTC m=+141.911521770" watchObservedRunningTime="2026-01-01 08:28:52.798579357 +0000 UTC m=+141.933848126" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.799049 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d48qm" podStartSLOduration=122.799044864 podStartE2EDuration="2m2.799044864s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:52.797745177 +0000 UTC m=+141.933013956" watchObservedRunningTime="2026-01-01 08:28:52.799044864 +0000 UTC m=+141.934313633" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.800795 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n955p" 
event={"ID":"e620889a-27ac-4bc5-99e9-b1033d3f2345","Type":"ContainerStarted","Data":"319f8eae20684205f3baa1a4128c8c2468b39deb6da0c0e04e0b3cf48deac72a"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.816340 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" event={"ID":"cab9f457-c407-4075-bddf-deb1c8e86b45","Type":"ContainerStarted","Data":"2197c8c02b2a75f161fcd010e2c90936175b5859c8d6a57233353ebf7fd2b1f7"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.816378 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" event={"ID":"cab9f457-c407-4075-bddf-deb1c8e86b45","Type":"ContainerStarted","Data":"6fae176db3c342250b23f4f4160e9fa25d00f276eaea609104e35ee3a2234ce4"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.817218 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.827098 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7qdzz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.827149 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" podUID="cab9f457-c407-4075-bddf-deb1c8e86b45" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.829205 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fxjs9" 
event={"ID":"9e8f3532-2214-4769-b4f1-2edb51ca7aec","Type":"ContainerStarted","Data":"1ac2d9b440d749acdf9979fa77e0f34f63481998e9a3af15d2882c453147f64e"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.830049 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fxjs9" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.848904 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxjs9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.848966 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxjs9" podUID="9e8f3532-2214-4769-b4f1-2edb51ca7aec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.851406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2" event={"ID":"aa7410b8-9dc3-410f-9c3b-c8cac55804c7","Type":"ContainerStarted","Data":"1cb26240248bcd098fe25c87e720a880802337788fa19c3afc6117ab033ffc25"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.851451 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2" event={"ID":"aa7410b8-9dc3-410f-9c3b-c8cac55804c7","Type":"ContainerStarted","Data":"c512cb4e77c66f917f03a1bf345c45dc4b3245d49ac26a1b40180860ad8653c5"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.856182 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sbfbv" 
event={"ID":"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb","Type":"ContainerStarted","Data":"c54fd3b736fc90f95c08f161bbd1bc0cb886cfbe8ad14e1d882180579cc23740"} Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.857596 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8tlg5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.857640 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" podUID="15e74714-78ff-4351-9088-ddf6672ce8a5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.870464 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjdqd" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.870671 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5klc7" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.878653 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.880277 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" 
podStartSLOduration=122.880260104 podStartE2EDuration="2m2.880260104s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:52.849258206 +0000 UTC m=+141.984526995" watchObservedRunningTime="2026-01-01 08:28:52.880260104 +0000 UTC m=+142.015528873" Jan 01 08:28:52 crc kubenswrapper[4867]: E0101 08:28:52.881724 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.381710617 +0000 UTC m=+142.516979386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.891244 4867 patch_prober.go:28] interesting pod/router-default-5444994796-7v969 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 01 08:28:52 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 01 08:28:52 crc kubenswrapper[4867]: [+]process-running ok Jan 01 08:28:52 crc kubenswrapper[4867]: healthz check failed Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.891572 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7v969" podUID="1ac62bb4-9b43-4266-8325-ecdc8d1c0d39" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.904045 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" podStartSLOduration=122.904028842 podStartE2EDuration="2m2.904028842s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:52.903522764 +0000 UTC m=+142.038791543" watchObservedRunningTime="2026-01-01 08:28:52.904028842 +0000 UTC m=+142.039297611" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.904303 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c47kg" podStartSLOduration=123.904297122 podStartE2EDuration="2m3.904297122s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:52.88125433 +0000 UTC m=+142.016523109" watchObservedRunningTime="2026-01-01 08:28:52.904297122 +0000 UTC m=+142.039565891" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.938590 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8mps5" podStartSLOduration=122.938569908 podStartE2EDuration="2m2.938569908s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:52.936184412 +0000 UTC m=+142.071453191" watchObservedRunningTime="2026-01-01 08:28:52.938569908 +0000 UTC m=+142.073838677" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.960239 4867 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console/downloads-7954f5f757-fxjs9" podStartSLOduration=123.96022194 podStartE2EDuration="2m3.96022194s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:52.959977251 +0000 UTC m=+142.095246030" watchObservedRunningTime="2026-01-01 08:28:52.96022194 +0000 UTC m=+142.095490719" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.979038 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nsfm2" podStartSLOduration=122.979023628 podStartE2EDuration="2m2.979023628s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:52.977867186 +0000 UTC m=+142.113135955" watchObservedRunningTime="2026-01-01 08:28:52.979023628 +0000 UTC m=+142.114292397" Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.980443 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:52 crc kubenswrapper[4867]: E0101 08:28:52.980804 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.480774731 +0000 UTC m=+142.616043500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:52 crc kubenswrapper[4867]: I0101 08:28:52.981298 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.001819 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.50180027 +0000 UTC m=+142.637069039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.090323 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.090515 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.59048567 +0000 UTC m=+142.725754439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.090697 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.091102 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.591095142 +0000 UTC m=+142.726363911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.191527 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.191712 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.691682691 +0000 UTC m=+142.826951460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.191802 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.192345 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.692335375 +0000 UTC m=+142.827604244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.293009 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.293128 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.793110651 +0000 UTC m=+142.928379420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.293551 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.293830 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.793818937 +0000 UTC m=+142.929087706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.314374 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gmjjl"] Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.315278 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.321598 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.345678 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmjjl"] Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.394514 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.394642 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-catalog-content\") pod \"community-operators-gmjjl\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " pod="openshift-marketplace/community-operators-gmjjl" Jan 
01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.394677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcqn9\" (UniqueName: \"kubernetes.io/projected/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-kube-api-access-mcqn9\") pod \"community-operators-gmjjl\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.394720 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-utilities\") pod \"community-operators-gmjjl\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.394757 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.894733638 +0000 UTC m=+143.030002407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.496397 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-catalog-content\") pod \"community-operators-gmjjl\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.496448 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.496717 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:53.996706858 +0000 UTC m=+143.131975627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.496869 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcqn9\" (UniqueName: \"kubernetes.io/projected/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-kube-api-access-mcqn9\") pod \"community-operators-gmjjl\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.496912 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-catalog-content\") pod \"community-operators-gmjjl\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.496970 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-utilities\") pod \"community-operators-gmjjl\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.497208 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-utilities\") pod \"community-operators-gmjjl\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " 
pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.499335 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-txdr6"] Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.500152 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.506515 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.531773 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcqn9\" (UniqueName: \"kubernetes.io/projected/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-kube-api-access-mcqn9\") pod \"community-operators-gmjjl\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.597554 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.597795 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.097772835 +0000 UTC m=+143.233041604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.598447 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frr5l\" (UniqueName: \"kubernetes.io/projected/72494188-2bff-4e14-8a71-041a84c049f2-kube-api-access-frr5l\") pod \"certified-operators-txdr6\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.598502 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-utilities\") pod \"certified-operators-txdr6\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.598562 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.598608 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-catalog-content\") pod \"certified-operators-txdr6\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.598948 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.098935927 +0000 UTC m=+143.234204696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.600174 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txdr6"] Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.636148 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.700641 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lk4zz"] Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.701599 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.703182 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.703352 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.203324403 +0000 UTC m=+143.338593182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.703388 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-utilities\") pod \"certified-operators-txdr6\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.703411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-catalog-content\") pod 
\"community-operators-lk4zz\" (UID: \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.703612 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.703644 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-catalog-content\") pod \"certified-operators-txdr6\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.703665 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-utilities\") pod \"community-operators-lk4zz\" (UID: \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.703722 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n545\" (UniqueName: \"kubernetes.io/projected/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-kube-api-access-5n545\") pod \"community-operators-lk4zz\" (UID: \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.703745 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frr5l\" (UniqueName: 
\"kubernetes.io/projected/72494188-2bff-4e14-8a71-041a84c049f2-kube-api-access-frr5l\") pod \"certified-operators-txdr6\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.703895 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-utilities\") pod \"certified-operators-txdr6\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.703914 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.203899394 +0000 UTC m=+143.339168163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.704226 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-catalog-content\") pod \"certified-operators-txdr6\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.731346 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lk4zz"] Jan 01 08:28:53 
crc kubenswrapper[4867]: I0101 08:28:53.771553 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frr5l\" (UniqueName: \"kubernetes.io/projected/72494188-2bff-4e14-8a71-041a84c049f2-kube-api-access-frr5l\") pod \"certified-operators-txdr6\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.804685 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.804850 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.304824706 +0000 UTC m=+143.440093475 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.804936 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.804996 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-utilities\") pod \"community-operators-lk4zz\" (UID: \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.805078 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n545\" (UniqueName: \"kubernetes.io/projected/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-kube-api-access-5n545\") pod \"community-operators-lk4zz\" (UID: \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.805121 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-catalog-content\") pod \"community-operators-lk4zz\" (UID: 
\"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.805937 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.305919525 +0000 UTC m=+143.441188294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.806103 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-catalog-content\") pod \"community-operators-lk4zz\" (UID: \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.806435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-utilities\") pod \"community-operators-lk4zz\" (UID: \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.814428 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.834550 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n545\" (UniqueName: \"kubernetes.io/projected/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-kube-api-access-5n545\") pod \"community-operators-lk4zz\" (UID: \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.883489 4867 patch_prober.go:28] interesting pod/router-default-5444994796-7v969 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 01 08:28:53 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 01 08:28:53 crc kubenswrapper[4867]: [+]process-running ok Jan 01 08:28:53 crc kubenswrapper[4867]: healthz check failed Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.883625 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7v969" podUID="1ac62bb4-9b43-4266-8325-ecdc8d1c0d39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.898025 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" event={"ID":"ed3ea167-3dde-4d3d-b36b-277e5368f1c9","Type":"ContainerStarted","Data":"aef129ecc0ac3ce02b207ca25600b4c322c872e8c9af9326a544a7369c6dab45"} Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.905812 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:53 crc kubenswrapper[4867]: E0101 08:28:53.906848 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.406833016 +0000 UTC m=+143.542101785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.910701 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qxlxw"] Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.911577 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.925287 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" podStartSLOduration=124.925242961 podStartE2EDuration="2m4.925242961s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:53.919142851 +0000 UTC m=+143.054411620" watchObservedRunningTime="2026-01-01 08:28:53.925242961 +0000 UTC m=+143.060511730" Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.934239 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxlxw"] Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.959679 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh" event={"ID":"8e196803-de1e-4b07-acf8-489c31eb0bf2","Type":"ContainerStarted","Data":"a38fa727e3f0e774ba727ad12fe8decd12fbb5ff58723836d32a58e1433af681"} Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.966349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" event={"ID":"17d1f537-63f7-4ee9-ba1a-8395a56d24a3","Type":"ContainerStarted","Data":"3cca9e365786c857378c83ad78555693779141bbf43a792db707274c282e0525"} Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.980433 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69dxz" event={"ID":"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1","Type":"ContainerStarted","Data":"1b676feb704726a5774d49c40d7b960d98a8dd396a95f84e6453922fecb909eb"} Jan 01 08:28:53 crc kubenswrapper[4867]: I0101 08:28:53.993281 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7qgnh" podStartSLOduration=123.993260095 podStartE2EDuration="2m3.993260095s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:53.990533557 +0000 UTC m=+143.125802326" watchObservedRunningTime="2026-01-01 08:28:53.993260095 +0000 UTC m=+143.128528864" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.017720 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjdzc" event={"ID":"e93786ca-d394-46a5-94d8-00c6056cea5a","Type":"ContainerStarted","Data":"175418e53778dcbb265a56c42c368dbd587c9037f42b3c450506994f8e1b3c31"} Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.023404 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5gnbj" podStartSLOduration=125.023387382 podStartE2EDuration="2m5.023387382s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:54.023350891 +0000 UTC m=+143.158619650" watchObservedRunningTime="2026-01-01 08:28:54.023387382 +0000 UTC m=+143.158656151" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.024827 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" event={"ID":"e005805b-ffc8-4dcf-88bc-31611953d870","Type":"ContainerStarted","Data":"bcc5879c648467adde5e6c049e6c761938174a6d2f266a6380a9e9b43125694d"} Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.027288 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sbfbv" 
event={"ID":"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb","Type":"ContainerStarted","Data":"f769b496cf331019ba3721cb06a313cce9c45735760c1bde810c221598d6210f"} Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.027313 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sbfbv" event={"ID":"9d2ba3c1-e279-4376-a7fb-8bd238dbd9bb","Type":"ContainerStarted","Data":"e70491a0b1f5393cec4db8641e3cf31f69c5566bfa91015ccd8d15da9acaa2c3"} Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.027514 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sbfbv" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.030347 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.032137 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-catalog-content\") pod \"certified-operators-qxlxw\" (UID: \"8723fd85-0062-4c7e-b113-f46b791257f4\") " pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.032331 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6r4f\" (UniqueName: \"kubernetes.io/projected/8723fd85-0062-4c7e-b113-f46b791257f4-kube-api-access-j6r4f\") pod \"certified-operators-qxlxw\" (UID: \"8723fd85-0062-4c7e-b113-f46b791257f4\") " pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.032380 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-utilities\") pod \"certified-operators-qxlxw\" (UID: 
\"8723fd85-0062-4c7e-b113-f46b791257f4\") " pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.032410 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:54 crc kubenswrapper[4867]: E0101 08:28:54.035326 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.535313422 +0000 UTC m=+143.670582191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.056283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" event={"ID":"ba8ed252-ce14-45fe-8a3e-82d7981d3acb","Type":"ContainerStarted","Data":"9dd444ba37e568a0c63792727a630006b4fc917f207d01da082a82db6b875f68"} Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.056911 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxjs9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 
10.217.0.21:8080: connect: connection refused" start-of-body= Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.056948 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxjs9" podUID="9e8f3532-2214-4769-b4f1-2edb51ca7aec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.065935 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p7jh5" podStartSLOduration=124.065921217 podStartE2EDuration="2m4.065921217s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:54.063952786 +0000 UTC m=+143.199221575" watchObservedRunningTime="2026-01-01 08:28:54.065921217 +0000 UTC m=+143.201189986" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.066073 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7qdzz" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.070002 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ztzb7" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.114211 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmjjl"] Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.115403 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-67p67" podStartSLOduration=124.115390362 podStartE2EDuration="2m4.115390362s" podCreationTimestamp="2026-01-01 08:26:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:54.112090963 +0000 UTC m=+143.247359732" watchObservedRunningTime="2026-01-01 08:28:54.115390362 +0000 UTC m=+143.250659131" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.138115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:54 crc kubenswrapper[4867]: E0101 08:28:54.152697 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.652670387 +0000 UTC m=+143.787939156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.152899 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6r4f\" (UniqueName: \"kubernetes.io/projected/8723fd85-0062-4c7e-b113-f46b791257f4-kube-api-access-j6r4f\") pod \"certified-operators-qxlxw\" (UID: \"8723fd85-0062-4c7e-b113-f46b791257f4\") " pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.153082 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-utilities\") pod \"certified-operators-qxlxw\" (UID: \"8723fd85-0062-4c7e-b113-f46b791257f4\") " pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.153127 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.153195 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-catalog-content\") pod \"certified-operators-qxlxw\" (UID: 
\"8723fd85-0062-4c7e-b113-f46b791257f4\") " pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:28:54 crc kubenswrapper[4867]: E0101 08:28:54.156577 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.656563758 +0000 UTC m=+143.791832527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.163985 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-catalog-content\") pod \"certified-operators-qxlxw\" (UID: \"8723fd85-0062-4c7e-b113-f46b791257f4\") " pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.165100 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-utilities\") pod \"certified-operators-qxlxw\" (UID: \"8723fd85-0062-4c7e-b113-f46b791257f4\") " pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:28:54 crc kubenswrapper[4867]: W0101 08:28:54.166685 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcb25595_1b19_4e0b_a711_f3e0ed8e0689.slice/crio-eb78519c532b2ef04198c41a771f58ce9c8da21f1aacdc8c038ec3fc7e5357a3 
WatchSource:0}: Error finding container eb78519c532b2ef04198c41a771f58ce9c8da21f1aacdc8c038ec3fc7e5357a3: Status 404 returned error can't find the container with id eb78519c532b2ef04198c41a771f58ce9c8da21f1aacdc8c038ec3fc7e5357a3 Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.179774 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sbfbv" podStartSLOduration=9.179733954 podStartE2EDuration="9.179733954s" podCreationTimestamp="2026-01-01 08:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:54.16966521 +0000 UTC m=+143.304933999" watchObservedRunningTime="2026-01-01 08:28:54.179733954 +0000 UTC m=+143.315002723" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.210971 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6r4f\" (UniqueName: \"kubernetes.io/projected/8723fd85-0062-4c7e-b113-f46b791257f4-kube-api-access-j6r4f\") pod \"certified-operators-qxlxw\" (UID: \"8723fd85-0062-4c7e-b113-f46b791257f4\") " pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.214012 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnc8j" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.236914 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txdr6"] Jan 01 08:28:54 crc kubenswrapper[4867]: W0101 08:28:54.246000 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72494188_2bff_4e14_8a71_041a84c049f2.slice/crio-ff17e0b9138056267a34bb722bc75e20ac75c8902ba3b6321deb2914a0a5b3f9 WatchSource:0}: Error finding container ff17e0b9138056267a34bb722bc75e20ac75c8902ba3b6321deb2914a0a5b3f9: Status 
404 returned error can't find the container with id ff17e0b9138056267a34bb722bc75e20ac75c8902ba3b6321deb2914a0a5b3f9 Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.256856 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:54 crc kubenswrapper[4867]: E0101 08:28:54.258631 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.75861585 +0000 UTC m=+143.893884619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.262524 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.358620 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:54 crc kubenswrapper[4867]: E0101 08:28:54.359394 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.859379015 +0000 UTC m=+143.994647784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.433221 4867 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.460134 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:54 
crc kubenswrapper[4867]: E0101 08:28:54.460663 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:54.960649959 +0000 UTC m=+144.095918728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.489717 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lk4zz"] Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.562105 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:54 crc kubenswrapper[4867]: E0101 08:28:54.562686 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:55.06267457 +0000 UTC m=+144.197943339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.573856 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxlxw"] Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.662847 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:54 crc kubenswrapper[4867]: E0101 08:28:54.662995 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:55.162970229 +0000 UTC m=+144.298238998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.663341 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:54 crc kubenswrapper[4867]: E0101 08:28:54.663612 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:55.163599392 +0000 UTC m=+144.298868151 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.764031 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:54 crc kubenswrapper[4867]: E0101 08:28:54.764368 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:55.264230933 +0000 UTC m=+144.399499702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.865217 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:54 crc kubenswrapper[4867]: E0101 08:28:54.865468 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:55.365456896 +0000 UTC m=+144.500725665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.869989 4867 patch_prober.go:28] interesting pod/router-default-5444994796-7v969 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 01 08:28:54 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 01 08:28:54 crc kubenswrapper[4867]: [+]process-running ok Jan 01 08:28:54 crc kubenswrapper[4867]: healthz check failed Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.870021 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7v969" podUID="1ac62bb4-9b43-4266-8325-ecdc8d1c0d39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 01 08:28:54 crc kubenswrapper[4867]: I0101 08:28:54.965923 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:54 crc kubenswrapper[4867]: E0101 08:28:54.966371 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-01 08:28:55.466357917 +0000 UTC m=+144.601626686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.060943 4867 generic.go:334] "Generic (PLEG): container finished" podID="72494188-2bff-4e14-8a71-041a84c049f2" containerID="48bf2a57c3aa98e0c9ff529aa158f6d78614a23e928bd18f8f6fd89626bad53a" exitCode=0 Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.061105 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txdr6" event={"ID":"72494188-2bff-4e14-8a71-041a84c049f2","Type":"ContainerDied","Data":"48bf2a57c3aa98e0c9ff529aa158f6d78614a23e928bd18f8f6fd89626bad53a"} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.061788 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txdr6" event={"ID":"72494188-2bff-4e14-8a71-041a84c049f2","Type":"ContainerStarted","Data":"ff17e0b9138056267a34bb722bc75e20ac75c8902ba3b6321deb2914a0a5b3f9"} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.063237 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.063253 4867 generic.go:334] "Generic (PLEG): container finished" podID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" containerID="223b5521c8d122a0bce447ed243859acdf7f74480ac420951b1926b60afd6c55" exitCode=0 Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.063369 4867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-lk4zz" event={"ID":"a072e3d1-b363-49da-b227-6c6f7bb0aa9d","Type":"ContainerDied","Data":"223b5521c8d122a0bce447ed243859acdf7f74480ac420951b1926b60afd6c55"} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.063538 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lk4zz" event={"ID":"a072e3d1-b363-49da-b227-6c6f7bb0aa9d","Type":"ContainerStarted","Data":"832e8064b4a9229cf65a6ff0975be503c8dacfad85f27ea81ac44dfc73aebb65"} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.065025 4867 generic.go:334] "Generic (PLEG): container finished" podID="8723fd85-0062-4c7e-b113-f46b791257f4" containerID="366d38b8d87b466d6cee49ee248f8b91652f0dc34c3f9457d33d8a132c1bbb74" exitCode=0 Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.065116 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlxw" event={"ID":"8723fd85-0062-4c7e-b113-f46b791257f4","Type":"ContainerDied","Data":"366d38b8d87b466d6cee49ee248f8b91652f0dc34c3f9457d33d8a132c1bbb74"} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.065190 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlxw" event={"ID":"8723fd85-0062-4c7e-b113-f46b791257f4","Type":"ContainerStarted","Data":"e7803ea7e41964f3c4f8c817cc180ab756bbaea08ecd5086cb0876619a68af98"} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.069056 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:55 crc kubenswrapper[4867]: E0101 08:28:55.069908 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:55.569868692 +0000 UTC m=+144.705137461 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.076809 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69dxz" event={"ID":"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1","Type":"ContainerStarted","Data":"e2c432ce49350d8faebb030dd4bd5be654ea94bc56f0ca1abb885752c68fe3c2"} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.076933 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69dxz" event={"ID":"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1","Type":"ContainerStarted","Data":"19f237b463bc93a0457a46fe301d7f244e52a70175c3d3f1a04949d62c5c139f"} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.077022 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69dxz" event={"ID":"7e3bf0e4-b65e-4a66-9b0b-6b4c067dc4f1","Type":"ContainerStarted","Data":"c9078765029f14ca8c0c1d98d2545dc83c768eced95457f92851da80fd62faf8"} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.079040 4867 generic.go:334] "Generic (PLEG): container finished" podID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerID="1be2a6b57798ddf7130e0d1257322fba44bd6a3e1fdc6aff86bbaab59a2143db" exitCode=0 Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.079146 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmjjl" event={"ID":"bcb25595-1b19-4e0b-a711-f3e0ed8e0689","Type":"ContainerDied","Data":"1be2a6b57798ddf7130e0d1257322fba44bd6a3e1fdc6aff86bbaab59a2143db"} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.079175 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmjjl" event={"ID":"bcb25595-1b19-4e0b-a711-f3e0ed8e0689","Type":"ContainerStarted","Data":"eb78519c532b2ef04198c41a771f58ce9c8da21f1aacdc8c038ec3fc7e5357a3"} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.079912 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-fxjs9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.079951 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fxjs9" podUID="9e8f3532-2214-4769-b4f1-2edb51ca7aec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.107212 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-69dxz" podStartSLOduration=10.107195799 podStartE2EDuration="10.107195799s" podCreationTimestamp="2026-01-01 08:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:55.105334731 +0000 UTC m=+144.240603500" watchObservedRunningTime="2026-01-01 08:28:55.107195799 +0000 UTC m=+144.242464568" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.170484 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:55 crc kubenswrapper[4867]: E0101 08:28:55.170676 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:55.670635038 +0000 UTC m=+144.805903807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.171256 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:55 crc kubenswrapper[4867]: E0101 08:28:55.172218 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-01 08:28:55.672201644 +0000 UTC m=+144.807470403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nb85" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.272244 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:55 crc kubenswrapper[4867]: E0101 08:28:55.272531 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-01 08:28:55.772513924 +0000 UTC m=+144.907782693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.307934 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dr5fz"] Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.309424 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.311374 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.311935 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr5fz"] Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.353873 4867 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-01T08:28:54.433399426Z","Handler":null,"Name":""} Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.364305 4867 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.364339 4867 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.375215 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.377072 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.377108 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.401724 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nb85\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.476441 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.476734 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-utilities\") pod \"redhat-marketplace-dr5fz\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.476795 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-catalog-content\") pod \"redhat-marketplace-dr5fz\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.476858 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5hks\" (UniqueName: \"kubernetes.io/projected/8b1938e8-f894-481e-a3d9-9050583ee8c2-kube-api-access-m5hks\") pod \"redhat-marketplace-dr5fz\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.485152 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.577807 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-utilities\") pod \"redhat-marketplace-dr5fz\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.577868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-catalog-content\") pod \"redhat-marketplace-dr5fz\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.577921 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5hks\" (UniqueName: \"kubernetes.io/projected/8b1938e8-f894-481e-a3d9-9050583ee8c2-kube-api-access-m5hks\") pod \"redhat-marketplace-dr5fz\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.578418 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-utilities\") pod \"redhat-marketplace-dr5fz\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.578587 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-catalog-content\") pod \"redhat-marketplace-dr5fz\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " pod="openshift-marketplace/redhat-marketplace-dr5fz" 
Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.610229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5hks\" (UniqueName: \"kubernetes.io/projected/8b1938e8-f894-481e-a3d9-9050583ee8c2-kube-api-access-m5hks\") pod \"redhat-marketplace-dr5fz\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.622720 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.681016 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.689070 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wfbvj"] Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.690443 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.702181 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfbvj"] Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.783464 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-catalog-content\") pod \"redhat-marketplace-wfbvj\" (UID: \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.783539 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-utilities\") pod \"redhat-marketplace-wfbvj\" (UID: \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.783633 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2jb8\" (UniqueName: \"kubernetes.io/projected/6ee33b06-e0e4-458d-8b01-76c6f2d62891-kube-api-access-r2jb8\") pod \"redhat-marketplace-wfbvj\" (UID: \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.877494 4867 patch_prober.go:28] interesting pod/router-default-5444994796-7v969 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 01 08:28:55 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 01 08:28:55 crc kubenswrapper[4867]: [+]process-running ok Jan 01 08:28:55 crc kubenswrapper[4867]: healthz 
check failed Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.877551 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7v969" podUID="1ac62bb4-9b43-4266-8325-ecdc8d1c0d39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.884407 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr5fz"] Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.884493 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2jb8\" (UniqueName: \"kubernetes.io/projected/6ee33b06-e0e4-458d-8b01-76c6f2d62891-kube-api-access-r2jb8\") pod \"redhat-marketplace-wfbvj\" (UID: \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.884593 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-catalog-content\") pod \"redhat-marketplace-wfbvj\" (UID: \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.884667 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-utilities\") pod \"redhat-marketplace-wfbvj\" (UID: \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.885440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-utilities\") pod \"redhat-marketplace-wfbvj\" (UID: 
\"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.887904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-catalog-content\") pod \"redhat-marketplace-wfbvj\" (UID: \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:28:55 crc kubenswrapper[4867]: W0101 08:28:55.898175 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b1938e8_f894_481e_a3d9_9050583ee8c2.slice/crio-610ffc783fc5ad098307f5810b44b215957c1ce7ba5208e3ff05629065e294c1 WatchSource:0}: Error finding container 610ffc783fc5ad098307f5810b44b215957c1ce7ba5208e3ff05629065e294c1: Status 404 returned error can't find the container with id 610ffc783fc5ad098307f5810b44b215957c1ce7ba5208e3ff05629065e294c1 Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.903769 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2jb8\" (UniqueName: \"kubernetes.io/projected/6ee33b06-e0e4-458d-8b01-76c6f2d62891-kube-api-access-r2jb8\") pod \"redhat-marketplace-wfbvj\" (UID: \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:28:55 crc kubenswrapper[4867]: I0101 08:28:55.971875 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nb85"] Jan 01 08:28:55 crc kubenswrapper[4867]: W0101 08:28:55.982354 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbab255b3_6b41_494f_a4f6_dde5ebe7b538.slice/crio-ce8043c911f4b287e7d74de50848297bc19a3f2745188805cf6ca2c858c741c4 WatchSource:0}: Error finding container 
ce8043c911f4b287e7d74de50848297bc19a3f2745188805cf6ca2c858c741c4: Status 404 returned error can't find the container with id ce8043c911f4b287e7d74de50848297bc19a3f2745188805cf6ca2c858c741c4 Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.005986 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.100870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" event={"ID":"bab255b3-6b41-494f-a4f6-dde5ebe7b538","Type":"ContainerStarted","Data":"ce8043c911f4b287e7d74de50848297bc19a3f2745188805cf6ca2c858c741c4"} Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.124796 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr5fz" event={"ID":"8b1938e8-f894-481e-a3d9-9050583ee8c2","Type":"ContainerStarted","Data":"8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59"} Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.124835 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr5fz" event={"ID":"8b1938e8-f894-481e-a3d9-9050583ee8c2","Type":"ContainerStarted","Data":"610ffc783fc5ad098307f5810b44b215957c1ce7ba5208e3ff05629065e294c1"} Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.230223 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfbvj"] Jan 01 08:28:56 crc kubenswrapper[4867]: W0101 08:28:56.235591 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ee33b06_e0e4_458d_8b01_76c6f2d62891.slice/crio-316ad113acd6d9d45f408d1e29ca166c36d79dd25614288b0bf32f87e776201f WatchSource:0}: Error finding container 316ad113acd6d9d45f408d1e29ca166c36d79dd25614288b0bf32f87e776201f: Status 404 returned error can't find the container 
with id 316ad113acd6d9d45f408d1e29ca166c36d79dd25614288b0bf32f87e776201f Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.692098 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bt5bw"] Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.694414 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.700260 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bt5bw"] Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.710270 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.748849 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.769103 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bsfrw" Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.869258 4867 patch_prober.go:28] interesting pod/router-default-5444994796-7v969 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 01 08:28:56 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 01 08:28:56 crc kubenswrapper[4867]: [+]process-running ok Jan 01 08:28:56 crc kubenswrapper[4867]: healthz check failed Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.869314 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7v969" podUID="1ac62bb4-9b43-4266-8325-ecdc8d1c0d39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.875678 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.903236 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.912159 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-catalog-content\") pod \"redhat-operators-bt5bw\" (UID: \"8cda336b-c663-4993-bdc1-66b729bf0740\") " pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.912225 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-622ml\" (UniqueName: \"kubernetes.io/projected/8cda336b-c663-4993-bdc1-66b729bf0740-kube-api-access-622ml\") pod \"redhat-operators-bt5bw\" (UID: \"8cda336b-c663-4993-bdc1-66b729bf0740\") " pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:28:56 crc kubenswrapper[4867]: I0101 08:28:56.912306 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-utilities\") pod \"redhat-operators-bt5bw\" (UID: \"8cda336b-c663-4993-bdc1-66b729bf0740\") " pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.014552 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-catalog-content\") pod \"redhat-operators-bt5bw\" (UID: \"8cda336b-c663-4993-bdc1-66b729bf0740\") " 
pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.014661 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-622ml\" (UniqueName: \"kubernetes.io/projected/8cda336b-c663-4993-bdc1-66b729bf0740-kube-api-access-622ml\") pod \"redhat-operators-bt5bw\" (UID: \"8cda336b-c663-4993-bdc1-66b729bf0740\") " pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.014739 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-utilities\") pod \"redhat-operators-bt5bw\" (UID: \"8cda336b-c663-4993-bdc1-66b729bf0740\") " pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.015850 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-catalog-content\") pod \"redhat-operators-bt5bw\" (UID: \"8cda336b-c663-4993-bdc1-66b729bf0740\") " pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.017210 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-utilities\") pod \"redhat-operators-bt5bw\" (UID: \"8cda336b-c663-4993-bdc1-66b729bf0740\") " pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.040985 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-622ml\" (UniqueName: \"kubernetes.io/projected/8cda336b-c663-4993-bdc1-66b729bf0740-kube-api-access-622ml\") pod \"redhat-operators-bt5bw\" (UID: \"8cda336b-c663-4993-bdc1-66b729bf0740\") " pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:28:57 
crc kubenswrapper[4867]: I0101 08:28:57.102074 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d2x67"]
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.103055 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2x67"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.115609 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.115682 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.115769 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.115797 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.119517 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.123535 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.123564 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.143308 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.143912 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2x67"]
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.172671 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" event={"ID":"bab255b3-6b41-494f-a4f6-dde5ebe7b538","Type":"ContainerStarted","Data":"8d47e170aa6bcb204c2758b521d56a700de61b0e3faf84929813cba1ff65f620"}
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.172737 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.178529 4867 generic.go:334] "Generic (PLEG): container finished" podID="8b1938e8-f894-481e-a3d9-9050583ee8c2" containerID="8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59" exitCode=0
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.178640 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr5fz" event={"ID":"8b1938e8-f894-481e-a3d9-9050583ee8c2","Type":"ContainerDied","Data":"8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59"}
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.180473 4867 generic.go:334] "Generic (PLEG): container finished" podID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerID="94a79ff6f7306d91bb450c1eb3024d8c9aa518c971bde605321d4f3ac7509f5d" exitCode=0
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.181515 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfbvj" event={"ID":"6ee33b06-e0e4-458d-8b01-76c6f2d62891","Type":"ContainerDied","Data":"94a79ff6f7306d91bb450c1eb3024d8c9aa518c971bde605321d4f3ac7509f5d"}
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.181540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfbvj" event={"ID":"6ee33b06-e0e4-458d-8b01-76c6f2d62891","Type":"ContainerStarted","Data":"316ad113acd6d9d45f408d1e29ca166c36d79dd25614288b0bf32f87e776201f"}
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.204778 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" podStartSLOduration=128.204719964 podStartE2EDuration="2m8.204719964s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:28:57.195563394 +0000 UTC m=+146.330832173" watchObservedRunningTime="2026-01-01 08:28:57.204719964 +0000 UTC m=+146.339988743"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.216071 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.216838 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjmv\" (UniqueName: \"kubernetes.io/projected/390347a2-a9b2-4441-8910-1be8ea15282c-kube-api-access-2tjmv\") pod \"redhat-operators-d2x67\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " pod="openshift-marketplace/redhat-operators-d2x67"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.216871 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-catalog-content\") pod \"redhat-operators-d2x67\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " pod="openshift-marketplace/redhat-operators-d2x67"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.217051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-utilities\") pod \"redhat-operators-d2x67\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " pod="openshift-marketplace/redhat-operators-d2x67"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.269152 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.280347 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.291905 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.315261 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bt5bw"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.319633 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-utilities\") pod \"redhat-operators-d2x67\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " pod="openshift-marketplace/redhat-operators-d2x67"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.319724 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjmv\" (UniqueName: \"kubernetes.io/projected/390347a2-a9b2-4441-8910-1be8ea15282c-kube-api-access-2tjmv\") pod \"redhat-operators-d2x67\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " pod="openshift-marketplace/redhat-operators-d2x67"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.319816 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-catalog-content\") pod \"redhat-operators-d2x67\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " pod="openshift-marketplace/redhat-operators-d2x67"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.325005 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-catalog-content\") pod \"redhat-operators-d2x67\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " pod="openshift-marketplace/redhat-operators-d2x67"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.325076 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-utilities\") pod \"redhat-operators-d2x67\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " pod="openshift-marketplace/redhat-operators-d2x67"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.362506 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjmv\" (UniqueName: \"kubernetes.io/projected/390347a2-a9b2-4441-8910-1be8ea15282c-kube-api-access-2tjmv\") pod \"redhat-operators-d2x67\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " pod="openshift-marketplace/redhat-operators-d2x67"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.485398 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2x67"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.873168 4867 patch_prober.go:28] interesting pod/router-default-5444994796-7v969 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 01 08:28:57 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld
Jan 01 08:28:57 crc kubenswrapper[4867]: [+]process-running ok
Jan 01 08:28:57 crc kubenswrapper[4867]: healthz check failed
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.873386 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7v969" podUID="1ac62bb4-9b43-4266-8325-ecdc8d1c0d39" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.896248 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6lsq2"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.896281 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6lsq2"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.918580 4867 patch_prober.go:28] interesting pod/console-f9d7485db-6lsq2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.918630 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6lsq2" podUID="25d57f2f-1353-417b-ba47-a0ceb1a4577e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused"
Jan 01 08:28:57 crc kubenswrapper[4867]: I0101 08:28:57.999088 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2x67"]
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.035763 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bt5bw"]
Jan 01 08:28:58 crc kubenswrapper[4867]: W0101 08:28:58.060683 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-ce9883c23cb33f3b86eebc830d0b3f2e8571325ba4808bf1544b70e474a76ea9 WatchSource:0}: Error finding container ce9883c23cb33f3b86eebc830d0b3f2e8571325ba4808bf1544b70e474a76ea9: Status 404 returned error can't find the container with id ce9883c23cb33f3b86eebc830d0b3f2e8571325ba4808bf1544b70e474a76ea9
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.260606 4867 generic.go:334] "Generic (PLEG): container finished" podID="ed3ea167-3dde-4d3d-b36b-277e5368f1c9" containerID="aef129ecc0ac3ce02b207ca25600b4c322c872e8c9af9326a544a7369c6dab45" exitCode=0
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.260700 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" event={"ID":"ed3ea167-3dde-4d3d-b36b-277e5368f1c9","Type":"ContainerDied","Data":"aef129ecc0ac3ce02b207ca25600b4c322c872e8c9af9326a544a7369c6dab45"}
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.266952 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt5bw" event={"ID":"8cda336b-c663-4993-bdc1-66b729bf0740","Type":"ContainerStarted","Data":"661b50172ba99e1c4d18945a6619c6b5356936cd4642917f465552f1e1aeaf1e"}
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.270917 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b92773bfc4af8374aa4921b630b3ac921bf6033d85b1c93c4caeea7b0e2f8a5c"}
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.270973 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"142254b6d7ea418b78edba9b381608e8be5de7fa7f67d8f71c9daa8bd880c733"}
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.295930 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ce9883c23cb33f3b86eebc830d0b3f2e8571325ba4808bf1544b70e474a76ea9"}
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.337952 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2x67" event={"ID":"390347a2-a9b2-4441-8910-1be8ea15282c","Type":"ContainerStarted","Data":"3cc98a1959707c2e28819cb157af1d1c398db52fe5e88251693c2ba363f43336"}
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.349623 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4ce5748d9a23ce6bcc4067d5f5b606552944c3652dbf350e5ff889cf691fd3b7"}
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.380775 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.381549 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.383375 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.391324 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.399514 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.573173 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3015ebab-9424-4495-ac2d-90ba00932f83-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3015ebab-9424-4495-ac2d-90ba00932f83\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.573561 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3015ebab-9424-4495-ac2d-90ba00932f83-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3015ebab-9424-4495-ac2d-90ba00932f83\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.608623 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.674712 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3015ebab-9424-4495-ac2d-90ba00932f83-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3015ebab-9424-4495-ac2d-90ba00932f83\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.674794 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3015ebab-9424-4495-ac2d-90ba00932f83-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3015ebab-9424-4495-ac2d-90ba00932f83\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.674915 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3015ebab-9424-4495-ac2d-90ba00932f83-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3015ebab-9424-4495-ac2d-90ba00932f83\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.699545 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3015ebab-9424-4495-ac2d-90ba00932f83-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3015ebab-9424-4495-ac2d-90ba00932f83\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.748373 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.782828 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fxjs9"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.866658 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-7v969"
Jan 01 08:28:58 crc kubenswrapper[4867]: I0101 08:28:58.870236 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-7v969"
Jan 01 08:28:59 crc kubenswrapper[4867]: I0101 08:28:59.093897 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 01 08:28:59 crc kubenswrapper[4867]: I0101 08:28:59.382342 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1f4d48fbe3f6a0fc23471b436d75aea4687a1103a4215028c434afd13a72f7a7"}
Jan 01 08:28:59 crc kubenswrapper[4867]: I0101 08:28:59.382446 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 01 08:28:59 crc kubenswrapper[4867]: I0101 08:28:59.387941 4867 generic.go:334] "Generic (PLEG): container finished" podID="390347a2-a9b2-4441-8910-1be8ea15282c" containerID="47f2e3e60e83d25e4e911ad12935321eb27514a4651e9ae627d52ae9dea47771" exitCode=0
Jan 01 08:28:59 crc kubenswrapper[4867]: I0101 08:28:59.387997 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2x67" event={"ID":"390347a2-a9b2-4441-8910-1be8ea15282c","Type":"ContainerDied","Data":"47f2e3e60e83d25e4e911ad12935321eb27514a4651e9ae627d52ae9dea47771"}
Jan 01 08:28:59 crc kubenswrapper[4867]: I0101 08:28:59.390686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3f3423e8ee8872989149ae66964063a763dfb3580c6060dc5eecf832481216db"}
Jan 01 08:28:59 crc kubenswrapper[4867]: I0101 08:28:59.393286 4867 generic.go:334] "Generic (PLEG): container finished" podID="8cda336b-c663-4993-bdc1-66b729bf0740" containerID="0a6dcef94bc6eff3e35e95019b5f7d8c774fc9ed67b9fbb78f2c1dc26e34e760" exitCode=0
Jan 01 08:28:59 crc kubenswrapper[4867]: I0101 08:28:59.393336 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt5bw" event={"ID":"8cda336b-c663-4993-bdc1-66b729bf0740","Type":"ContainerDied","Data":"0a6dcef94bc6eff3e35e95019b5f7d8c774fc9ed67b9fbb78f2c1dc26e34e760"}
Jan 01 08:28:59 crc kubenswrapper[4867]: I0101 08:28:59.399877 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3015ebab-9424-4495-ac2d-90ba00932f83","Type":"ContainerStarted","Data":"387401624e02ec1699987d0fba87f99c9ecbee5e496e0f8ffe7e2bcba5f7d6c4"}
Jan 01 08:28:59 crc kubenswrapper[4867]: I0101 08:28:59.419776 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-7v969"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.093295 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.094990 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.101916 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.102046 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.124621 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.226427 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.226525 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.302775 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.327504 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.327595 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.327699 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.383787 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.416296 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.429224 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg47j\" (UniqueName: \"kubernetes.io/projected/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-kube-api-access-tg47j\") pod \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") "
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.429362 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-config-volume\") pod \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") "
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.429394 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-secret-volume\") pod \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\" (UID: \"ed3ea167-3dde-4d3d-b36b-277e5368f1c9\") "
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.431009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-config-volume" (OuterVolumeSpecName: "config-volume") pod "ed3ea167-3dde-4d3d-b36b-277e5368f1c9" (UID: "ed3ea167-3dde-4d3d-b36b-277e5368f1c9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.440268 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-kube-api-access-tg47j" (OuterVolumeSpecName: "kube-api-access-tg47j") pod "ed3ea167-3dde-4d3d-b36b-277e5368f1c9" (UID: "ed3ea167-3dde-4d3d-b36b-277e5368f1c9"). InnerVolumeSpecName "kube-api-access-tg47j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.460239 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ed3ea167-3dde-4d3d-b36b-277e5368f1c9" (UID: "ed3ea167-3dde-4d3d-b36b-277e5368f1c9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.464813 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3015ebab-9424-4495-ac2d-90ba00932f83","Type":"ContainerStarted","Data":"45afba78b18c05d1d8b91f880d5ff13e86e9071d325571a8a8d7b061df62216a"}
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.467345 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.468976 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs" event={"ID":"ed3ea167-3dde-4d3d-b36b-277e5368f1c9","Type":"ContainerDied","Data":"114e92d1d20332eb1c35e284e0ef4f5ece705d2ed154a6f2259669ec173e3dc1"}
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.469050 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="114e92d1d20332eb1c35e284e0ef4f5ece705d2ed154a6f2259669ec173e3dc1"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.489639 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.489620262 podStartE2EDuration="2.489620262s" podCreationTimestamp="2026-01-01 08:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:29:00.48595193 +0000 UTC m=+149.621220709" watchObservedRunningTime="2026-01-01 08:29:00.489620262 +0000 UTC m=+149.624889031"
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.531913 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-config-volume\") on node \"crc\" DevicePath \"\""
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.531947 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.531957 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg47j\" (UniqueName: \"kubernetes.io/projected/ed3ea167-3dde-4d3d-b36b-277e5368f1c9-kube-api-access-tg47j\") on node \"crc\" DevicePath \"\""
Jan 01 08:29:00 crc kubenswrapper[4867]: I0101 08:29:00.732194 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 01 08:29:00 crc kubenswrapper[4867]: W0101 08:29:00.796002 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podab09b2b2_b2b8_4c6d_90e3_8c8a0206eb08.slice/crio-bf12157c777509ac1618e6b94c2402cee48e370f2a097c95b3c786379fa2543d WatchSource:0}: Error finding container bf12157c777509ac1618e6b94c2402cee48e370f2a097c95b3c786379fa2543d: Status 404 returned error can't find the container with id bf12157c777509ac1618e6b94c2402cee48e370f2a097c95b3c786379fa2543d
Jan 01 08:29:01 crc kubenswrapper[4867]: I0101 08:29:01.493186 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-sqxbg_01fa587e-a8a9-4092-9462-905cf90cf1dc/cluster-samples-operator/0.log"
Jan 01 08:29:01 crc kubenswrapper[4867]: I0101 08:29:01.493457 4867 generic.go:334] "Generic (PLEG): container finished" podID="01fa587e-a8a9-4092-9462-905cf90cf1dc" containerID="aff1bf3f81e37056144d26a9ae516adc301b9767d9ec5c45ae2d7b9fc07cd70d" exitCode=2
Jan 01 08:29:01 crc kubenswrapper[4867]: I0101 08:29:01.493506 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" event={"ID":"01fa587e-a8a9-4092-9462-905cf90cf1dc","Type":"ContainerDied","Data":"aff1bf3f81e37056144d26a9ae516adc301b9767d9ec5c45ae2d7b9fc07cd70d"}
Jan 01 08:29:01 crc kubenswrapper[4867]: I0101 08:29:01.493925 4867 scope.go:117] "RemoveContainer" containerID="aff1bf3f81e37056144d26a9ae516adc301b9767d9ec5c45ae2d7b9fc07cd70d"
Jan 01 08:29:01 crc kubenswrapper[4867]: I0101 08:29:01.515607 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08","Type":"ContainerStarted","Data":"bf12157c777509ac1618e6b94c2402cee48e370f2a097c95b3c786379fa2543d"}
Jan 01 08:29:01 crc kubenswrapper[4867]: I0101 08:29:01.535213 4867 generic.go:334] "Generic (PLEG): container finished" podID="3015ebab-9424-4495-ac2d-90ba00932f83" containerID="45afba78b18c05d1d8b91f880d5ff13e86e9071d325571a8a8d7b061df62216a" exitCode=0
Jan 01 08:29:01 crc kubenswrapper[4867]: I0101 08:29:01.535261 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3015ebab-9424-4495-ac2d-90ba00932f83","Type":"ContainerDied","Data":"45afba78b18c05d1d8b91f880d5ff13e86e9071d325571a8a8d7b061df62216a"}
Jan 01 08:29:02 crc kubenswrapper[4867]: E0101 08:29:02.319385 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podab09b2b2_b2b8_4c6d_90e3_8c8a0206eb08.slice/crio-8f02c3defdd0247c02fad1448b8c77806ec7654147eb17fa27f09544fbee8eae.scope\": RecentStats: unable to find data in memory cache]"
Jan 01 08:29:02 crc kubenswrapper[4867]: I0101 08:29:02.560940 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-sqxbg_01fa587e-a8a9-4092-9462-905cf90cf1dc/cluster-samples-operator/0.log"
Jan 01 08:29:02 crc kubenswrapper[4867]: I0101 08:29:02.561877 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sqxbg" event={"ID":"01fa587e-a8a9-4092-9462-905cf90cf1dc","Type":"ContainerStarted","Data":"c31e3fecc570ef766597d3230845a659d2f6d7c8668ddea6a74df82fb757e7a0"}
Jan 01 08:29:02 crc kubenswrapper[4867]: I0101 08:29:02.582295 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08","Type":"ContainerStarted","Data":"8f02c3defdd0247c02fad1448b8c77806ec7654147eb17fa27f09544fbee8eae"}
Jan 01 08:29:02 crc kubenswrapper[4867]: I0101 08:29:02.623273 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.623253229 podStartE2EDuration="2.623253229s" podCreationTimestamp="2026-01-01 08:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:29:02.617332776 +0000 UTC m=+151.752601555" watchObservedRunningTime="2026-01-01 08:29:02.623253229 +0000 UTC m=+151.758521998"
Jan 01 08:29:02 crc kubenswrapper[4867]: I0101 08:29:02.909115 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 01 08:29:03 crc kubenswrapper[4867]: I0101 08:29:03.079962 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3015ebab-9424-4495-ac2d-90ba00932f83-kube-api-access\") pod \"3015ebab-9424-4495-ac2d-90ba00932f83\" (UID: \"3015ebab-9424-4495-ac2d-90ba00932f83\") "
Jan 01 08:29:03 crc kubenswrapper[4867]: I0101 08:29:03.080124 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3015ebab-9424-4495-ac2d-90ba00932f83-kubelet-dir\") pod \"3015ebab-9424-4495-ac2d-90ba00932f83\" (UID: \"3015ebab-9424-4495-ac2d-90ba00932f83\") "
Jan 01 08:29:03 crc kubenswrapper[4867]: I0101 08:29:03.080489 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3015ebab-9424-4495-ac2d-90ba00932f83-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3015ebab-9424-4495-ac2d-90ba00932f83" (UID: "3015ebab-9424-4495-ac2d-90ba00932f83"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 01 08:29:03 crc kubenswrapper[4867]: I0101 08:29:03.081057 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3015ebab-9424-4495-ac2d-90ba00932f83-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 01 08:29:03 crc kubenswrapper[4867]: I0101 08:29:03.110216 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3015ebab-9424-4495-ac2d-90ba00932f83-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3015ebab-9424-4495-ac2d-90ba00932f83" (UID: "3015ebab-9424-4495-ac2d-90ba00932f83"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:29:03 crc kubenswrapper[4867]: I0101 08:29:03.182569 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3015ebab-9424-4495-ac2d-90ba00932f83-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 01 08:29:03 crc kubenswrapper[4867]: I0101 08:29:03.600356 4867 generic.go:334] "Generic (PLEG): container finished" podID="ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08" containerID="8f02c3defdd0247c02fad1448b8c77806ec7654147eb17fa27f09544fbee8eae" exitCode=0
Jan 01 08:29:03 crc kubenswrapper[4867]: I0101 08:29:03.600485 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08","Type":"ContainerDied","Data":"8f02c3defdd0247c02fad1448b8c77806ec7654147eb17fa27f09544fbee8eae"}
Jan 01 08:29:03 crc kubenswrapper[4867]: I0101 08:29:03.604690 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3015ebab-9424-4495-ac2d-90ba00932f83","Type":"ContainerDied","Data":"387401624e02ec1699987d0fba87f99c9ecbee5e496e0f8ffe7e2bcba5f7d6c4"}
Jan 01 08:29:03 crc kubenswrapper[4867]: I0101 08:29:03.604758 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="387401624e02ec1699987d0fba87f99c9ecbee5e496e0f8ffe7e2bcba5f7d6c4"
Jan 01 08:29:03 crc kubenswrapper[4867]: I0101 08:29:03.604858 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 01 08:29:04 crc kubenswrapper[4867]: I0101 08:29:04.267976 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sbfbv"
Jan 01 08:29:04 crc kubenswrapper[4867]: I0101 08:29:04.973843 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 01 08:29:05 crc kubenswrapper[4867]: I0101 08:29:05.115397 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kubelet-dir\") pod \"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08\" (UID: \"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08\") " Jan 01 08:29:05 crc kubenswrapper[4867]: I0101 08:29:05.115503 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kube-api-access\") pod \"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08\" (UID: \"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08\") " Jan 01 08:29:05 crc kubenswrapper[4867]: I0101 08:29:05.116758 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08" (UID: "ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:29:05 crc kubenswrapper[4867]: I0101 08:29:05.132367 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08" (UID: "ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:29:05 crc kubenswrapper[4867]: I0101 08:29:05.217523 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 01 08:29:05 crc kubenswrapper[4867]: I0101 08:29:05.217557 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 01 08:29:05 crc kubenswrapper[4867]: I0101 08:29:05.626249 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08","Type":"ContainerDied","Data":"bf12157c777509ac1618e6b94c2402cee48e370f2a097c95b3c786379fa2543d"} Jan 01 08:29:05 crc kubenswrapper[4867]: I0101 08:29:05.626565 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf12157c777509ac1618e6b94c2402cee48e370f2a097c95b3c786379fa2543d" Jan 01 08:29:05 crc kubenswrapper[4867]: I0101 08:29:05.626394 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 01 08:29:07 crc kubenswrapper[4867]: I0101 08:29:07.895328 4867 patch_prober.go:28] interesting pod/console-f9d7485db-6lsq2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 01 08:29:07 crc kubenswrapper[4867]: I0101 08:29:07.895384 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6lsq2" podUID="25d57f2f-1353-417b-ba47-a0ceb1a4577e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 01 08:29:09 crc kubenswrapper[4867]: I0101 08:29:09.506625 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:29:12 crc kubenswrapper[4867]: I0101 08:29:12.418958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:29:12 crc kubenswrapper[4867]: I0101 08:29:12.425467 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28af0def-191f-4949-b617-a7a07dd8145b-metrics-certs\") pod \"network-metrics-daemon-kv8wr\" (UID: \"28af0def-191f-4949-b617-a7a07dd8145b\") " pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:29:12 crc kubenswrapper[4867]: I0101 08:29:12.554768 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kv8wr" Jan 01 08:29:15 crc kubenswrapper[4867]: I0101 08:29:15.684915 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:29:17 crc kubenswrapper[4867]: I0101 08:29:17.900138 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:29:17 crc kubenswrapper[4867]: I0101 08:29:17.905012 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:29:21 crc kubenswrapper[4867]: I0101 08:29:21.331942 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:29:21 crc kubenswrapper[4867]: I0101 08:29:21.332071 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:29:28 crc kubenswrapper[4867]: I0101 08:29:28.621080 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-77tbv" Jan 01 08:29:30 crc kubenswrapper[4867]: E0101 08:29:30.667493 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 01 08:29:30 crc kubenswrapper[4867]: E0101 08:29:30.667981 4867 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5n545,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lk4zz_openshift-marketplace(a072e3d1-b363-49da-b227-6c6f7bb0aa9d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 01 08:29:30 crc kubenswrapper[4867]: E0101 08:29:30.669261 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lk4zz" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" Jan 01 08:29:33 crc kubenswrapper[4867]: E0101 08:29:33.327018 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 01 08:29:33 crc kubenswrapper[4867]: E0101 08:29:33.327558 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcqn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,A
ppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gmjjl_openshift-marketplace(bcb25595-1b19-4e0b-a711-f3e0ed8e0689): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 01 08:29:33 crc kubenswrapper[4867]: E0101 08:29:33.328757 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gmjjl" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" Jan 01 08:29:35 crc kubenswrapper[4867]: E0101 08:29:35.903168 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gmjjl" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" Jan 01 08:29:35 crc kubenswrapper[4867]: E0101 08:29:35.903282 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lk4zz" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" Jan 01 08:29:35 crc kubenswrapper[4867]: E0101 08:29:35.980473 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 01 08:29:35 
crc kubenswrapper[4867]: E0101 08:29:35.980934 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tjmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-d2x67_openshift-marketplace(390347a2-a9b2-4441-8910-1be8ea15282c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 01 08:29:35 crc kubenswrapper[4867]: E0101 08:29:35.982138 4867 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-d2x67" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" Jan 01 08:29:36 crc kubenswrapper[4867]: E0101 08:29:36.008580 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 01 08:29:36 crc kubenswrapper[4867]: E0101 08:29:36.008762 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-622ml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bt5bw_openshift-marketplace(8cda336b-c663-4993-bdc1-66b729bf0740): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 01 08:29:36 crc kubenswrapper[4867]: E0101 08:29:36.009939 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bt5bw" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" Jan 01 08:29:37 crc kubenswrapper[4867]: E0101 08:29:37.083360 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-d2x67" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" Jan 01 08:29:37 crc kubenswrapper[4867]: E0101 08:29:37.083388 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bt5bw" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" Jan 01 08:29:37 crc kubenswrapper[4867]: E0101 08:29:37.138279 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 
01 08:29:37 crc kubenswrapper[4867]: E0101 08:29:37.138416 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5hks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dr5fz_openshift-marketplace(8b1938e8-f894-481e-a3d9-9050583ee8c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 01 08:29:37 crc kubenswrapper[4867]: E0101 08:29:37.140665 4867 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dr5fz" podUID="8b1938e8-f894-481e-a3d9-9050583ee8c2" Jan 01 08:29:37 crc kubenswrapper[4867]: E0101 08:29:37.144514 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 01 08:29:37 crc kubenswrapper[4867]: E0101 08:29:37.144645 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2jb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,Run
AsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wfbvj_openshift-marketplace(6ee33b06-e0e4-458d-8b01-76c6f2d62891): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 01 08:29:37 crc kubenswrapper[4867]: E0101 08:29:37.145804 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wfbvj" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.285174 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.488347 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 01 08:29:37 crc kubenswrapper[4867]: E0101 08:29:37.488771 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed3ea167-3dde-4d3d-b36b-277e5368f1c9" containerName="collect-profiles" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.488799 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed3ea167-3dde-4d3d-b36b-277e5368f1c9" containerName="collect-profiles" Jan 01 08:29:37 crc kubenswrapper[4867]: E0101 08:29:37.488913 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3015ebab-9424-4495-ac2d-90ba00932f83" containerName="pruner" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.488935 4867 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3015ebab-9424-4495-ac2d-90ba00932f83" containerName="pruner" Jan 01 08:29:37 crc kubenswrapper[4867]: E0101 08:29:37.489351 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08" containerName="pruner" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.489376 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08" containerName="pruner" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.489557 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab09b2b2-b2b8-4c6d-90e3-8c8a0206eb08" containerName="pruner" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.489597 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3015ebab-9424-4495-ac2d-90ba00932f83" containerName="pruner" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.489614 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed3ea167-3dde-4d3d-b36b-277e5368f1c9" containerName="collect-profiles" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.490213 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.493201 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.494221 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.494515 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.616095 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adf84973-88d1-4031-aec7-500e8b780799-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adf84973-88d1-4031-aec7-500e8b780799\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.616390 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adf84973-88d1-4031-aec7-500e8b780799-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adf84973-88d1-4031-aec7-500e8b780799\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.718016 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adf84973-88d1-4031-aec7-500e8b780799-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adf84973-88d1-4031-aec7-500e8b780799\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.718051 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/adf84973-88d1-4031-aec7-500e8b780799-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adf84973-88d1-4031-aec7-500e8b780799\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.718161 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adf84973-88d1-4031-aec7-500e8b780799-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adf84973-88d1-4031-aec7-500e8b780799\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.735336 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adf84973-88d1-4031-aec7-500e8b780799-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adf84973-88d1-4031-aec7-500e8b780799\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 01 08:29:37 crc kubenswrapper[4867]: I0101 08:29:37.819753 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 01 08:29:38 crc kubenswrapper[4867]: E0101 08:29:38.585539 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wfbvj" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" Jan 01 08:29:38 crc kubenswrapper[4867]: E0101 08:29:38.585818 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dr5fz" podUID="8b1938e8-f894-481e-a3d9-9050583ee8c2" Jan 01 08:29:38 crc kubenswrapper[4867]: E0101 08:29:38.752780 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 01 08:29:38 crc kubenswrapper[4867]: E0101 08:29:38.753548 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6r4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qxlxw_openshift-marketplace(8723fd85-0062-4c7e-b113-f46b791257f4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 01 08:29:38 crc kubenswrapper[4867]: E0101 08:29:38.754968 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qxlxw" podUID="8723fd85-0062-4c7e-b113-f46b791257f4" Jan 01 08:29:38 crc 
kubenswrapper[4867]: E0101 08:29:38.779647 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 01 08:29:38 crc kubenswrapper[4867]: E0101 08:29:38.779781 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frr5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-txdr6_openshift-marketplace(72494188-2bff-4e14-8a71-041a84c049f2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 01 08:29:38 crc kubenswrapper[4867]: E0101 08:29:38.781232 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-txdr6" podUID="72494188-2bff-4e14-8a71-041a84c049f2" Jan 01 08:29:38 crc kubenswrapper[4867]: E0101 08:29:38.817381 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-txdr6" podUID="72494188-2bff-4e14-8a71-041a84c049f2" Jan 01 08:29:38 crc kubenswrapper[4867]: E0101 08:29:38.817811 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qxlxw" podUID="8723fd85-0062-4c7e-b113-f46b791257f4" Jan 01 08:29:38 crc kubenswrapper[4867]: I0101 08:29:38.994129 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 01 08:29:39 crc kubenswrapper[4867]: W0101 08:29:39.006617 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podadf84973_88d1_4031_aec7_500e8b780799.slice/crio-36b120e3e2b119a1224e959f18caf6888a59fb9417facbe4026f9f81349d158f WatchSource:0}: Error finding container 36b120e3e2b119a1224e959f18caf6888a59fb9417facbe4026f9f81349d158f: Status 404 
returned error can't find the container with id 36b120e3e2b119a1224e959f18caf6888a59fb9417facbe4026f9f81349d158f Jan 01 08:29:39 crc kubenswrapper[4867]: I0101 08:29:39.054020 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kv8wr"] Jan 01 08:29:39 crc kubenswrapper[4867]: W0101 08:29:39.065187 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28af0def_191f_4949_b617_a7a07dd8145b.slice/crio-7b1cdd52381c3fb733e93a3099acad77257f15f1e27e44f3683aab738bddaaf2 WatchSource:0}: Error finding container 7b1cdd52381c3fb733e93a3099acad77257f15f1e27e44f3683aab738bddaaf2: Status 404 returned error can't find the container with id 7b1cdd52381c3fb733e93a3099acad77257f15f1e27e44f3683aab738bddaaf2 Jan 01 08:29:39 crc kubenswrapper[4867]: I0101 08:29:39.829523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" event={"ID":"28af0def-191f-4949-b617-a7a07dd8145b","Type":"ContainerStarted","Data":"c0676e93ab835db9c6ff9aefbb334fdd2d4d028a04a63744bad92f648fc64565"} Jan 01 08:29:39 crc kubenswrapper[4867]: I0101 08:29:39.829965 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" event={"ID":"28af0def-191f-4949-b617-a7a07dd8145b","Type":"ContainerStarted","Data":"5a7f6e9a74284ef3e8fa542e1537c6c23195d1115d4505b4286e5377caa56ca4"} Jan 01 08:29:39 crc kubenswrapper[4867]: I0101 08:29:39.830026 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kv8wr" event={"ID":"28af0def-191f-4949-b617-a7a07dd8145b","Type":"ContainerStarted","Data":"7b1cdd52381c3fb733e93a3099acad77257f15f1e27e44f3683aab738bddaaf2"} Jan 01 08:29:39 crc kubenswrapper[4867]: I0101 08:29:39.831687 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"adf84973-88d1-4031-aec7-500e8b780799","Type":"ContainerStarted","Data":"7af5a301417f90df2f9dd1a0d2099274d6b25046eda6ad96de4025b53e4fc75d"} Jan 01 08:29:39 crc kubenswrapper[4867]: I0101 08:29:39.831729 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"adf84973-88d1-4031-aec7-500e8b780799","Type":"ContainerStarted","Data":"36b120e3e2b119a1224e959f18caf6888a59fb9417facbe4026f9f81349d158f"} Jan 01 08:29:39 crc kubenswrapper[4867]: I0101 08:29:39.860328 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kv8wr" podStartSLOduration=170.86030648 podStartE2EDuration="2m50.86030648s" podCreationTimestamp="2026-01-01 08:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:29:39.85061472 +0000 UTC m=+188.985883499" watchObservedRunningTime="2026-01-01 08:29:39.86030648 +0000 UTC m=+188.995575249" Jan 01 08:29:39 crc kubenswrapper[4867]: I0101 08:29:39.878673 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.878650262 podStartE2EDuration="2.878650262s" podCreationTimestamp="2026-01-01 08:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:29:39.87250608 +0000 UTC m=+189.007774929" watchObservedRunningTime="2026-01-01 08:29:39.878650262 +0000 UTC m=+189.013919041" Jan 01 08:29:40 crc kubenswrapper[4867]: I0101 08:29:40.842180 4867 generic.go:334] "Generic (PLEG): container finished" podID="adf84973-88d1-4031-aec7-500e8b780799" containerID="7af5a301417f90df2f9dd1a0d2099274d6b25046eda6ad96de4025b53e4fc75d" exitCode=0 Jan 01 08:29:40 crc kubenswrapper[4867]: I0101 08:29:40.842509 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"adf84973-88d1-4031-aec7-500e8b780799","Type":"ContainerDied","Data":"7af5a301417f90df2f9dd1a0d2099274d6b25046eda6ad96de4025b53e4fc75d"} Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.092572 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.278070 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adf84973-88d1-4031-aec7-500e8b780799-kubelet-dir\") pod \"adf84973-88d1-4031-aec7-500e8b780799\" (UID: \"adf84973-88d1-4031-aec7-500e8b780799\") " Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.278230 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adf84973-88d1-4031-aec7-500e8b780799-kube-api-access\") pod \"adf84973-88d1-4031-aec7-500e8b780799\" (UID: \"adf84973-88d1-4031-aec7-500e8b780799\") " Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.278891 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adf84973-88d1-4031-aec7-500e8b780799-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "adf84973-88d1-4031-aec7-500e8b780799" (UID: "adf84973-88d1-4031-aec7-500e8b780799"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.288627 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf84973-88d1-4031-aec7-500e8b780799-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "adf84973-88d1-4031-aec7-500e8b780799" (UID: "adf84973-88d1-4031-aec7-500e8b780799"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.289907 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 01 08:29:42 crc kubenswrapper[4867]: E0101 08:29:42.290411 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf84973-88d1-4031-aec7-500e8b780799" containerName="pruner" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.290447 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf84973-88d1-4031-aec7-500e8b780799" containerName="pruner" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.290588 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf84973-88d1-4031-aec7-500e8b780799" containerName="pruner" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.296896 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.297013 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.379987 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adf84973-88d1-4031-aec7-500e8b780799-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.380019 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adf84973-88d1-4031-aec7-500e8b780799-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.480601 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.480989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a01a834-2d82-4263-8abd-362d32ab94ec-kube-api-access\") pod \"installer-9-crc\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.481020 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-var-lock\") pod \"installer-9-crc\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.582731 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7a01a834-2d82-4263-8abd-362d32ab94ec-kube-api-access\") pod \"installer-9-crc\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.582795 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-var-lock\") pod \"installer-9-crc\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.582861 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.582985 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.583007 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-var-lock\") pod \"installer-9-crc\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.602658 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a01a834-2d82-4263-8abd-362d32ab94ec-kube-api-access\") pod \"installer-9-crc\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.625025 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.852566 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.861234 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"adf84973-88d1-4031-aec7-500e8b780799","Type":"ContainerDied","Data":"36b120e3e2b119a1224e959f18caf6888a59fb9417facbe4026f9f81349d158f"} Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.861274 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b120e3e2b119a1224e959f18caf6888a59fb9417facbe4026f9f81349d158f" Jan 01 08:29:42 crc kubenswrapper[4867]: I0101 08:29:42.862158 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 01 08:29:42 crc kubenswrapper[4867]: W0101 08:29:42.867004 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7a01a834_2d82_4263_8abd_362d32ab94ec.slice/crio-2b1466554192a5557db0a90d759c9ea7875ac36603d043d8a51c01e808761d65 WatchSource:0}: Error finding container 2b1466554192a5557db0a90d759c9ea7875ac36603d043d8a51c01e808761d65: Status 404 returned error can't find the container with id 2b1466554192a5557db0a90d759c9ea7875ac36603d043d8a51c01e808761d65 Jan 01 08:29:43 crc kubenswrapper[4867]: I0101 08:29:43.869130 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a01a834-2d82-4263-8abd-362d32ab94ec","Type":"ContainerStarted","Data":"dd48b94ccfa36e4a7327af4123737a3d7a8f019517a5078b5c3445869cdfdf96"} Jan 01 08:29:43 crc kubenswrapper[4867]: I0101 08:29:43.869660 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a01a834-2d82-4263-8abd-362d32ab94ec","Type":"ContainerStarted","Data":"2b1466554192a5557db0a90d759c9ea7875ac36603d043d8a51c01e808761d65"} Jan 01 08:29:43 crc kubenswrapper[4867]: I0101 08:29:43.889127 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.889098831 podStartE2EDuration="1.889098831s" podCreationTimestamp="2026-01-01 08:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:29:43.883290651 +0000 UTC m=+193.018559410" watchObservedRunningTime="2026-01-01 08:29:43.889098831 +0000 UTC m=+193.024367640" Jan 01 08:29:49 crc kubenswrapper[4867]: I0101 08:29:49.911876 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lk4zz" 
event={"ID":"a072e3d1-b363-49da-b227-6c6f7bb0aa9d","Type":"ContainerStarted","Data":"033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad"} Jan 01 08:29:50 crc kubenswrapper[4867]: I0101 08:29:50.918400 4867 generic.go:334] "Generic (PLEG): container finished" podID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" containerID="033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad" exitCode=0 Jan 01 08:29:50 crc kubenswrapper[4867]: I0101 08:29:50.918463 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lk4zz" event={"ID":"a072e3d1-b363-49da-b227-6c6f7bb0aa9d","Type":"ContainerDied","Data":"033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad"} Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.331002 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.331304 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.331356 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.331903 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df"} 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.331991 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df" gracePeriod=600 Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.926436 4867 generic.go:334] "Generic (PLEG): container finished" podID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerID="efc423dce71aa970f5ae9fcfa4e92aec31ae9a14861d84892fe500a25ab1c5af" exitCode=0 Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.926519 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfbvj" event={"ID":"6ee33b06-e0e4-458d-8b01-76c6f2d62891","Type":"ContainerDied","Data":"efc423dce71aa970f5ae9fcfa4e92aec31ae9a14861d84892fe500a25ab1c5af"} Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.929952 4867 generic.go:334] "Generic (PLEG): container finished" podID="8cda336b-c663-4993-bdc1-66b729bf0740" containerID="8ae66c9e2538394bb0b512c6122c3dc23dce4f661047a44dc6b4a1d5d58b4229" exitCode=0 Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.930027 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt5bw" event={"ID":"8cda336b-c663-4993-bdc1-66b729bf0740","Type":"ContainerDied","Data":"8ae66c9e2538394bb0b512c6122c3dc23dce4f661047a44dc6b4a1d5d58b4229"} Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.935331 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df" exitCode=0 Jan 01 08:29:51 crc kubenswrapper[4867]: 
I0101 08:29:51.935416 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df"} Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.935453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"c53a76cc86937cf15114c3707751f587066a2ca805617f3c3a8c296d350279a5"} Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.940224 4867 generic.go:334] "Generic (PLEG): container finished" podID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerID="eff0961e37310e35173a185b62a8daab00785a7eb8919489368c71e940318929" exitCode=0 Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.940294 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmjjl" event={"ID":"bcb25595-1b19-4e0b-a711-f3e0ed8e0689","Type":"ContainerDied","Data":"eff0961e37310e35173a185b62a8daab00785a7eb8919489368c71e940318929"} Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.947178 4867 generic.go:334] "Generic (PLEG): container finished" podID="72494188-2bff-4e14-8a71-041a84c049f2" containerID="18436afffc12676a4187fd5b32fe6f2e5cb5111f0e4a67e955a58545e0b4fae8" exitCode=0 Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.947255 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txdr6" event={"ID":"72494188-2bff-4e14-8a71-041a84c049f2","Type":"ContainerDied","Data":"18436afffc12676a4187fd5b32fe6f2e5cb5111f0e4a67e955a58545e0b4fae8"} Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.956262 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lk4zz" 
event={"ID":"a072e3d1-b363-49da-b227-6c6f7bb0aa9d","Type":"ContainerStarted","Data":"a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9"} Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.961716 4867 generic.go:334] "Generic (PLEG): container finished" podID="8b1938e8-f894-481e-a3d9-9050583ee8c2" containerID="bf5ed6a62eaadffa2363e6b3d39a98b154e19f18ea43ee5bd291f29218705812" exitCode=0 Jan 01 08:29:51 crc kubenswrapper[4867]: I0101 08:29:51.961758 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr5fz" event={"ID":"8b1938e8-f894-481e-a3d9-9050583ee8c2","Type":"ContainerDied","Data":"bf5ed6a62eaadffa2363e6b3d39a98b154e19f18ea43ee5bd291f29218705812"} Jan 01 08:29:52 crc kubenswrapper[4867]: I0101 08:29:52.056065 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lk4zz" podStartSLOduration=2.742844431 podStartE2EDuration="59.056048704s" podCreationTimestamp="2026-01-01 08:28:53 +0000 UTC" firstStartedPulling="2026-01-01 08:28:55.064330652 +0000 UTC m=+144.199599421" lastFinishedPulling="2026-01-01 08:29:51.377534925 +0000 UTC m=+200.512803694" observedRunningTime="2026-01-01 08:29:52.052302663 +0000 UTC m=+201.187571432" watchObservedRunningTime="2026-01-01 08:29:52.056048704 +0000 UTC m=+201.191317473" Jan 01 08:29:52 crc kubenswrapper[4867]: I0101 08:29:52.967732 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2x67" event={"ID":"390347a2-a9b2-4441-8910-1be8ea15282c","Type":"ContainerStarted","Data":"1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38"} Jan 01 08:29:52 crc kubenswrapper[4867]: I0101 08:29:52.971167 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txdr6" 
event={"ID":"72494188-2bff-4e14-8a71-041a84c049f2","Type":"ContainerStarted","Data":"cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607"} Jan 01 08:29:52 crc kubenswrapper[4867]: I0101 08:29:52.975255 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr5fz" event={"ID":"8b1938e8-f894-481e-a3d9-9050583ee8c2","Type":"ContainerStarted","Data":"621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb"} Jan 01 08:29:52 crc kubenswrapper[4867]: I0101 08:29:52.977293 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlxw" event={"ID":"8723fd85-0062-4c7e-b113-f46b791257f4","Type":"ContainerStarted","Data":"9ee0b1cbdfef2c13f9a916d635df4512c2d7637516fcd1e834555a56de72cdf9"} Jan 01 08:29:52 crc kubenswrapper[4867]: I0101 08:29:52.979339 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt5bw" event={"ID":"8cda336b-c663-4993-bdc1-66b729bf0740","Type":"ContainerStarted","Data":"8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29"} Jan 01 08:29:52 crc kubenswrapper[4867]: I0101 08:29:52.981753 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfbvj" event={"ID":"6ee33b06-e0e4-458d-8b01-76c6f2d62891","Type":"ContainerStarted","Data":"aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99"} Jan 01 08:29:52 crc kubenswrapper[4867]: I0101 08:29:52.984257 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmjjl" event={"ID":"bcb25595-1b19-4e0b-a711-f3e0ed8e0689","Type":"ContainerStarted","Data":"3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145"} Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.020139 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-txdr6" podStartSLOduration=2.528976787 
podStartE2EDuration="1m0.020121101s" podCreationTimestamp="2026-01-01 08:28:53 +0000 UTC" firstStartedPulling="2026-01-01 08:28:55.062718274 +0000 UTC m=+144.197987053" lastFinishedPulling="2026-01-01 08:29:52.553862598 +0000 UTC m=+201.689131367" observedRunningTime="2026-01-01 08:29:53.018898928 +0000 UTC m=+202.154167727" watchObservedRunningTime="2026-01-01 08:29:53.020121101 +0000 UTC m=+202.155389870" Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.036402 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gmjjl" podStartSLOduration=2.744079143 podStartE2EDuration="1m0.036378649s" podCreationTimestamp="2026-01-01 08:28:53 +0000 UTC" firstStartedPulling="2026-01-01 08:28:55.080544767 +0000 UTC m=+144.215813536" lastFinishedPulling="2026-01-01 08:29:52.372844273 +0000 UTC m=+201.508113042" observedRunningTime="2026-01-01 08:29:53.035683851 +0000 UTC m=+202.170952620" watchObservedRunningTime="2026-01-01 08:29:53.036378649 +0000 UTC m=+202.171647418" Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.055185 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bt5bw" podStartSLOduration=4.090941344 podStartE2EDuration="57.055167256s" podCreationTimestamp="2026-01-01 08:28:56 +0000 UTC" firstStartedPulling="2026-01-01 08:28:59.394722496 +0000 UTC m=+148.529991255" lastFinishedPulling="2026-01-01 08:29:52.358948388 +0000 UTC m=+201.494217167" observedRunningTime="2026-01-01 08:29:53.053131451 +0000 UTC m=+202.188400240" watchObservedRunningTime="2026-01-01 08:29:53.055167256 +0000 UTC m=+202.190436025" Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.072956 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dr5fz" podStartSLOduration=1.830093937 podStartE2EDuration="58.072938916s" podCreationTimestamp="2026-01-01 08:28:55 +0000 UTC" 
firstStartedPulling="2026-01-01 08:28:56.126614403 +0000 UTC m=+145.261883172" lastFinishedPulling="2026-01-01 08:29:52.369459382 +0000 UTC m=+201.504728151" observedRunningTime="2026-01-01 08:29:53.071490557 +0000 UTC m=+202.206759326" watchObservedRunningTime="2026-01-01 08:29:53.072938916 +0000 UTC m=+202.208207685" Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.091021 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wfbvj" podStartSLOduration=2.963537111 podStartE2EDuration="58.091004063s" podCreationTimestamp="2026-01-01 08:28:55 +0000 UTC" firstStartedPulling="2026-01-01 08:28:57.182653538 +0000 UTC m=+146.317922307" lastFinishedPulling="2026-01-01 08:29:52.31012049 +0000 UTC m=+201.445389259" observedRunningTime="2026-01-01 08:29:53.090027197 +0000 UTC m=+202.225295966" watchObservedRunningTime="2026-01-01 08:29:53.091004063 +0000 UTC m=+202.226272832" Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.636609 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.636998 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.815528 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.815590 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.990300 4867 generic.go:334] "Generic (PLEG): container finished" podID="8723fd85-0062-4c7e-b113-f46b791257f4" containerID="9ee0b1cbdfef2c13f9a916d635df4512c2d7637516fcd1e834555a56de72cdf9" exitCode=0 Jan 01 08:29:53 crc 
kubenswrapper[4867]: I0101 08:29:53.990377 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlxw" event={"ID":"8723fd85-0062-4c7e-b113-f46b791257f4","Type":"ContainerDied","Data":"9ee0b1cbdfef2c13f9a916d635df4512c2d7637516fcd1e834555a56de72cdf9"} Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.992024 4867 generic.go:334] "Generic (PLEG): container finished" podID="390347a2-a9b2-4441-8910-1be8ea15282c" containerID="1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38" exitCode=0 Jan 01 08:29:53 crc kubenswrapper[4867]: I0101 08:29:53.992443 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2x67" event={"ID":"390347a2-a9b2-4441-8910-1be8ea15282c","Type":"ContainerDied","Data":"1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38"} Jan 01 08:29:54 crc kubenswrapper[4867]: I0101 08:29:54.031834 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:29:54 crc kubenswrapper[4867]: I0101 08:29:54.032207 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:29:54 crc kubenswrapper[4867]: I0101 08:29:54.704960 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gmjjl" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerName="registry-server" probeResult="failure" output=< Jan 01 08:29:54 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 01 08:29:54 crc kubenswrapper[4867]: > Jan 01 08:29:54 crc kubenswrapper[4867]: I0101 08:29:54.847591 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-txdr6" podUID="72494188-2bff-4e14-8a71-041a84c049f2" containerName="registry-server" probeResult="failure" output=< Jan 01 08:29:54 crc kubenswrapper[4867]: 
timeout: failed to connect service ":50051" within 1s Jan 01 08:29:54 crc kubenswrapper[4867]: > Jan 01 08:29:54 crc kubenswrapper[4867]: I0101 08:29:54.998641 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2x67" event={"ID":"390347a2-a9b2-4441-8910-1be8ea15282c","Type":"ContainerStarted","Data":"7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498"} Jan 01 08:29:55 crc kubenswrapper[4867]: I0101 08:29:55.001318 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlxw" event={"ID":"8723fd85-0062-4c7e-b113-f46b791257f4","Type":"ContainerStarted","Data":"9958a470fe51fd14e4983c2894438d42602081d7d5b1bb7472be9aa1be38ed58"} Jan 01 08:29:55 crc kubenswrapper[4867]: I0101 08:29:55.042703 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d2x67" podStartSLOduration=3.04491554 podStartE2EDuration="58.042686041s" podCreationTimestamp="2026-01-01 08:28:57 +0000 UTC" firstStartedPulling="2026-01-01 08:28:59.390281055 +0000 UTC m=+148.525549824" lastFinishedPulling="2026-01-01 08:29:54.388051556 +0000 UTC m=+203.523320325" observedRunningTime="2026-01-01 08:29:55.021353676 +0000 UTC m=+204.156622435" watchObservedRunningTime="2026-01-01 08:29:55.042686041 +0000 UTC m=+204.177954800" Jan 01 08:29:55 crc kubenswrapper[4867]: I0101 08:29:55.043093 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qxlxw" podStartSLOduration=2.624070472 podStartE2EDuration="1m2.043088722s" podCreationTimestamp="2026-01-01 08:28:53 +0000 UTC" firstStartedPulling="2026-01-01 08:28:55.068184591 +0000 UTC m=+144.203453360" lastFinishedPulling="2026-01-01 08:29:54.487202841 +0000 UTC m=+203.622471610" observedRunningTime="2026-01-01 08:29:55.039818324 +0000 UTC m=+204.175087093" watchObservedRunningTime="2026-01-01 08:29:55.043088722 +0000 UTC 
m=+204.178357491" Jan 01 08:29:55 crc kubenswrapper[4867]: I0101 08:29:55.073162 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lk4zz" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" containerName="registry-server" probeResult="failure" output=< Jan 01 08:29:55 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 01 08:29:55 crc kubenswrapper[4867]: > Jan 01 08:29:55 crc kubenswrapper[4867]: I0101 08:29:55.623509 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:29:55 crc kubenswrapper[4867]: I0101 08:29:55.623577 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:29:55 crc kubenswrapper[4867]: I0101 08:29:55.756693 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:29:56 crc kubenswrapper[4867]: I0101 08:29:56.006905 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:29:56 crc kubenswrapper[4867]: I0101 08:29:56.007401 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:29:56 crc kubenswrapper[4867]: I0101 08:29:56.048294 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:29:57 crc kubenswrapper[4867]: I0101 08:29:57.050351 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:29:57 crc kubenswrapper[4867]: I0101 08:29:57.322495 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:29:57 crc kubenswrapper[4867]: I0101 
08:29:57.322532 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:29:57 crc kubenswrapper[4867]: I0101 08:29:57.486495 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d2x67" Jan 01 08:29:57 crc kubenswrapper[4867]: I0101 08:29:57.486797 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d2x67" Jan 01 08:29:58 crc kubenswrapper[4867]: I0101 08:29:58.363592 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bt5bw" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" containerName="registry-server" probeResult="failure" output=< Jan 01 08:29:58 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 01 08:29:58 crc kubenswrapper[4867]: > Jan 01 08:29:58 crc kubenswrapper[4867]: I0101 08:29:58.523963 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d2x67" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" containerName="registry-server" probeResult="failure" output=< Jan 01 08:29:58 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 01 08:29:58 crc kubenswrapper[4867]: > Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.145054 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfbvj"] Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.146216 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wfbvj" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerName="registry-server" containerID="cri-o://aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99" gracePeriod=2 Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.153258 4867 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng"] Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.154558 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.157800 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.162825 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.163403 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng"] Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.239495 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c691144-adcf-4de6-b068-db1692decd23-config-volume\") pod \"collect-profiles-29454270-wfdng\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.239623 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c691144-adcf-4de6-b068-db1692decd23-secret-volume\") pod \"collect-profiles-29454270-wfdng\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.239663 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7g9d\" (UniqueName: 
\"kubernetes.io/projected/2c691144-adcf-4de6-b068-db1692decd23-kube-api-access-f7g9d\") pod \"collect-profiles-29454270-wfdng\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.340683 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c691144-adcf-4de6-b068-db1692decd23-secret-volume\") pod \"collect-profiles-29454270-wfdng\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.340753 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7g9d\" (UniqueName: \"kubernetes.io/projected/2c691144-adcf-4de6-b068-db1692decd23-kube-api-access-f7g9d\") pod \"collect-profiles-29454270-wfdng\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.340809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c691144-adcf-4de6-b068-db1692decd23-config-volume\") pod \"collect-profiles-29454270-wfdng\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.341959 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c691144-adcf-4de6-b068-db1692decd23-config-volume\") pod \"collect-profiles-29454270-wfdng\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 
08:30:00.349229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c691144-adcf-4de6-b068-db1692decd23-secret-volume\") pod \"collect-profiles-29454270-wfdng\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.359391 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7g9d\" (UniqueName: \"kubernetes.io/projected/2c691144-adcf-4de6-b068-db1692decd23-kube-api-access-f7g9d\") pod \"collect-profiles-29454270-wfdng\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.554189 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:00 crc kubenswrapper[4867]: I0101 08:30:00.968606 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng"] Jan 01 08:30:00 crc kubenswrapper[4867]: W0101 08:30:00.976467 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c691144_adcf_4de6_b068_db1692decd23.slice/crio-40ee9f0c27529e13b91d5f85a39f18afcf9b18e1c4ed673ba0b6573cd86dc883 WatchSource:0}: Error finding container 40ee9f0c27529e13b91d5f85a39f18afcf9b18e1c4ed673ba0b6573cd86dc883: Status 404 returned error can't find the container with id 40ee9f0c27529e13b91d5f85a39f18afcf9b18e1c4ed673ba0b6573cd86dc883 Jan 01 08:30:01 crc kubenswrapper[4867]: I0101 08:30:01.042352 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" 
event={"ID":"2c691144-adcf-4de6-b068-db1692decd23","Type":"ContainerStarted","Data":"40ee9f0c27529e13b91d5f85a39f18afcf9b18e1c4ed673ba0b6573cd86dc883"} Jan 01 08:30:03 crc kubenswrapper[4867]: I0101 08:30:03.687679 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:30:03 crc kubenswrapper[4867]: I0101 08:30:03.737636 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:30:03 crc kubenswrapper[4867]: I0101 08:30:03.856754 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:30:03 crc kubenswrapper[4867]: I0101 08:30:03.910692 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:30:04 crc kubenswrapper[4867]: I0101 08:30:04.101448 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:30:04 crc kubenswrapper[4867]: I0101 08:30:04.166086 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:30:04 crc kubenswrapper[4867]: I0101 08:30:04.263338 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:30:04 crc kubenswrapper[4867]: I0101 08:30:04.263381 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:30:04 crc kubenswrapper[4867]: I0101 08:30:04.320408 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:30:05 crc kubenswrapper[4867]: I0101 08:30:05.063673 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="2c691144-adcf-4de6-b068-db1692decd23" containerID="4b9ebd95cc3faeb089504ece3ff02d7506d20aa66d81d7e28f72045e46f07a0f" exitCode=0 Jan 01 08:30:05 crc kubenswrapper[4867]: I0101 08:30:05.063743 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" event={"ID":"2c691144-adcf-4de6-b068-db1692decd23","Type":"ContainerDied","Data":"4b9ebd95cc3faeb089504ece3ff02d7506d20aa66d81d7e28f72045e46f07a0f"} Jan 01 08:30:05 crc kubenswrapper[4867]: I0101 08:30:05.065327 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wfbvj_6ee33b06-e0e4-458d-8b01-76c6f2d62891/registry-server/0.log" Jan 01 08:30:05 crc kubenswrapper[4867]: I0101 08:30:05.065955 4867 generic.go:334] "Generic (PLEG): container finished" podID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerID="aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99" exitCode=137 Jan 01 08:30:05 crc kubenswrapper[4867]: I0101 08:30:05.066079 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfbvj" event={"ID":"6ee33b06-e0e4-458d-8b01-76c6f2d62891","Type":"ContainerDied","Data":"aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99"} Jan 01 08:30:05 crc kubenswrapper[4867]: I0101 08:30:05.123581 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:30:05 crc kubenswrapper[4867]: I0101 08:30:05.688992 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:30:05 crc kubenswrapper[4867]: I0101 08:30:05.740364 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxlxw"] Jan 01 08:30:06 crc kubenswrapper[4867]: E0101 08:30:06.007692 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99 is running failed: container process not found" containerID="aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99" cmd=["grpc_health_probe","-addr=:50051"] Jan 01 08:30:06 crc kubenswrapper[4867]: E0101 08:30:06.007937 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99 is running failed: container process not found" containerID="aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99" cmd=["grpc_health_probe","-addr=:50051"] Jan 01 08:30:06 crc kubenswrapper[4867]: E0101 08:30:06.008105 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99 is running failed: container process not found" containerID="aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99" cmd=["grpc_health_probe","-addr=:50051"] Jan 01 08:30:06 crc kubenswrapper[4867]: E0101 08:30:06.008140 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-wfbvj" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerName="registry-server" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.054157 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wfbvj_6ee33b06-e0e4-458d-8b01-76c6f2d62891/registry-server/0.log" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.054682 4867 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.085451 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wfbvj_6ee33b06-e0e4-458d-8b01-76c6f2d62891/registry-server/0.log" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.089257 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfbvj" event={"ID":"6ee33b06-e0e4-458d-8b01-76c6f2d62891","Type":"ContainerDied","Data":"316ad113acd6d9d45f408d1e29ca166c36d79dd25614288b0bf32f87e776201f"} Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.089295 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfbvj" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.089343 4867 scope.go:117] "RemoveContainer" containerID="aac72d2ea811fccbca4f67c209d552ed264dab681007e08e22e923e49730fa99" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.107913 4867 scope.go:117] "RemoveContainer" containerID="efc423dce71aa970f5ae9fcfa4e92aec31ae9a14861d84892fe500a25ab1c5af" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.128178 4867 scope.go:117] "RemoveContainer" containerID="94a79ff6f7306d91bb450c1eb3024d8c9aa518c971bde605321d4f3ac7509f5d" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.230586 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-utilities\") pod \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\" (UID: \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.230637 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-catalog-content\") pod 
\"6ee33b06-e0e4-458d-8b01-76c6f2d62891\" (UID: \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.230736 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2jb8\" (UniqueName: \"kubernetes.io/projected/6ee33b06-e0e4-458d-8b01-76c6f2d62891-kube-api-access-r2jb8\") pod \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\" (UID: \"6ee33b06-e0e4-458d-8b01-76c6f2d62891\") " Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.231791 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-utilities" (OuterVolumeSpecName: "utilities") pod "6ee33b06-e0e4-458d-8b01-76c6f2d62891" (UID: "6ee33b06-e0e4-458d-8b01-76c6f2d62891"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.233610 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.253074 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee33b06-e0e4-458d-8b01-76c6f2d62891-kube-api-access-r2jb8" (OuterVolumeSpecName: "kube-api-access-r2jb8") pod "6ee33b06-e0e4-458d-8b01-76c6f2d62891" (UID: "6ee33b06-e0e4-458d-8b01-76c6f2d62891"). InnerVolumeSpecName "kube-api-access-r2jb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.274739 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ee33b06-e0e4-458d-8b01-76c6f2d62891" (UID: "6ee33b06-e0e4-458d-8b01-76c6f2d62891"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.331831 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lk4zz"] Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.332077 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lk4zz" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" containerName="registry-server" containerID="cri-o://a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9" gracePeriod=2 Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.335497 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2jb8\" (UniqueName: \"kubernetes.io/projected/6ee33b06-e0e4-458d-8b01-76c6f2d62891-kube-api-access-r2jb8\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.335527 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee33b06-e0e4-458d-8b01-76c6f2d62891-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.358680 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.431298 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfbvj"] Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.433707 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfbvj"] Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.436112 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c691144-adcf-4de6-b068-db1692decd23-secret-volume\") pod \"2c691144-adcf-4de6-b068-db1692decd23\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.436148 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c691144-adcf-4de6-b068-db1692decd23-config-volume\") pod \"2c691144-adcf-4de6-b068-db1692decd23\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.436174 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7g9d\" (UniqueName: \"kubernetes.io/projected/2c691144-adcf-4de6-b068-db1692decd23-kube-api-access-f7g9d\") pod \"2c691144-adcf-4de6-b068-db1692decd23\" (UID: \"2c691144-adcf-4de6-b068-db1692decd23\") " Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.437040 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c691144-adcf-4de6-b068-db1692decd23-config-volume" (OuterVolumeSpecName: "config-volume") pod "2c691144-adcf-4de6-b068-db1692decd23" (UID: "2c691144-adcf-4de6-b068-db1692decd23"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.440046 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c691144-adcf-4de6-b068-db1692decd23-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2c691144-adcf-4de6-b068-db1692decd23" (UID: "2c691144-adcf-4de6-b068-db1692decd23"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.442070 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c691144-adcf-4de6-b068-db1692decd23-kube-api-access-f7g9d" (OuterVolumeSpecName: "kube-api-access-f7g9d") pod "2c691144-adcf-4de6-b068-db1692decd23" (UID: "2c691144-adcf-4de6-b068-db1692decd23"). InnerVolumeSpecName "kube-api-access-f7g9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.537047 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c691144-adcf-4de6-b068-db1692decd23-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.537085 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c691144-adcf-4de6-b068-db1692decd23-config-volume\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:06 crc kubenswrapper[4867]: I0101 08:30:06.537095 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7g9d\" (UniqueName: \"kubernetes.io/projected/2c691144-adcf-4de6-b068-db1692decd23-kube-api-access-f7g9d\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:07 crc kubenswrapper[4867]: I0101 08:30:07.096636 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" 
event={"ID":"2c691144-adcf-4de6-b068-db1692decd23","Type":"ContainerDied","Data":"40ee9f0c27529e13b91d5f85a39f18afcf9b18e1c4ed673ba0b6573cd86dc883"} Jan 01 08:30:07 crc kubenswrapper[4867]: I0101 08:30:07.096696 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ee9f0c27529e13b91d5f85a39f18afcf9b18e1c4ed673ba0b6573cd86dc883" Jan 01 08:30:07 crc kubenswrapper[4867]: I0101 08:30:07.096979 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng" Jan 01 08:30:07 crc kubenswrapper[4867]: I0101 08:30:07.098379 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qxlxw" podUID="8723fd85-0062-4c7e-b113-f46b791257f4" containerName="registry-server" containerID="cri-o://9958a470fe51fd14e4983c2894438d42602081d7d5b1bb7472be9aa1be38ed58" gracePeriod=2 Jan 01 08:30:07 crc kubenswrapper[4867]: I0101 08:30:07.137561 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" path="/var/lib/kubelet/pods/6ee33b06-e0e4-458d-8b01-76c6f2d62891/volumes" Jan 01 08:30:07 crc kubenswrapper[4867]: I0101 08:30:07.383193 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:30:07 crc kubenswrapper[4867]: I0101 08:30:07.457851 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:30:07 crc kubenswrapper[4867]: I0101 08:30:07.541379 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d2x67" Jan 01 08:30:07 crc kubenswrapper[4867]: I0101 08:30:07.617534 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d2x67" Jan 01 08:30:08 crc 
kubenswrapper[4867]: I0101 08:30:08.646061 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:30:08 crc kubenswrapper[4867]: I0101 08:30:08.683215 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n545\" (UniqueName: \"kubernetes.io/projected/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-kube-api-access-5n545\") pod \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\" (UID: \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " Jan 01 08:30:08 crc kubenswrapper[4867]: I0101 08:30:08.683286 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-utilities\") pod \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\" (UID: \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " Jan 01 08:30:08 crc kubenswrapper[4867]: I0101 08:30:08.683313 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-catalog-content\") pod \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\" (UID: \"a072e3d1-b363-49da-b227-6c6f7bb0aa9d\") " Jan 01 08:30:08 crc kubenswrapper[4867]: I0101 08:30:08.684165 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-utilities" (OuterVolumeSpecName: "utilities") pod "a072e3d1-b363-49da-b227-6c6f7bb0aa9d" (UID: "a072e3d1-b363-49da-b227-6c6f7bb0aa9d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:30:08 crc kubenswrapper[4867]: I0101 08:30:08.686850 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-kube-api-access-5n545" (OuterVolumeSpecName: "kube-api-access-5n545") pod "a072e3d1-b363-49da-b227-6c6f7bb0aa9d" (UID: "a072e3d1-b363-49da-b227-6c6f7bb0aa9d"). InnerVolumeSpecName "kube-api-access-5n545". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:30:08 crc kubenswrapper[4867]: I0101 08:30:08.759565 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a072e3d1-b363-49da-b227-6c6f7bb0aa9d" (UID: "a072e3d1-b363-49da-b227-6c6f7bb0aa9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:30:08 crc kubenswrapper[4867]: I0101 08:30:08.784953 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:08 crc kubenswrapper[4867]: I0101 08:30:08.785193 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:08 crc kubenswrapper[4867]: I0101 08:30:08.785278 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n545\" (UniqueName: \"kubernetes.io/projected/a072e3d1-b363-49da-b227-6c6f7bb0aa9d-kube-api-access-5n545\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.113742 4867 generic.go:334] "Generic (PLEG): container finished" podID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" 
containerID="a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9" exitCode=0 Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.113811 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lk4zz" event={"ID":"a072e3d1-b363-49da-b227-6c6f7bb0aa9d","Type":"ContainerDied","Data":"a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9"} Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.113852 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lk4zz" event={"ID":"a072e3d1-b363-49da-b227-6c6f7bb0aa9d","Type":"ContainerDied","Data":"832e8064b4a9229cf65a6ff0975be503c8dacfad85f27ea81ac44dfc73aebb65"} Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.113845 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lk4zz" Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.113869 4867 scope.go:117] "RemoveContainer" containerID="a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9" Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.117598 4867 generic.go:334] "Generic (PLEG): container finished" podID="8723fd85-0062-4c7e-b113-f46b791257f4" containerID="9958a470fe51fd14e4983c2894438d42602081d7d5b1bb7472be9aa1be38ed58" exitCode=0 Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.117623 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlxw" event={"ID":"8723fd85-0062-4c7e-b113-f46b791257f4","Type":"ContainerDied","Data":"9958a470fe51fd14e4983c2894438d42602081d7d5b1bb7472be9aa1be38ed58"} Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.133741 4867 scope.go:117] "RemoveContainer" containerID="033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad" Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.156863 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-lk4zz"] Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.164206 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lk4zz"] Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.193801 4867 scope.go:117] "RemoveContainer" containerID="223b5521c8d122a0bce447ed243859acdf7f74480ac420951b1926b60afd6c55" Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.217566 4867 scope.go:117] "RemoveContainer" containerID="a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9" Jan 01 08:30:09 crc kubenswrapper[4867]: E0101 08:30:09.218141 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9\": container with ID starting with a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9 not found: ID does not exist" containerID="a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9" Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.218188 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9"} err="failed to get container status \"a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9\": rpc error: code = NotFound desc = could not find container \"a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9\": container with ID starting with a1b65e6737845f0b67dac59022db3f1febb943212fb10a1c69a5cbdececdece9 not found: ID does not exist" Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.218219 4867 scope.go:117] "RemoveContainer" containerID="033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad" Jan 01 08:30:09 crc kubenswrapper[4867]: E0101 08:30:09.218688 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad\": container with ID starting with 033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad not found: ID does not exist" containerID="033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad" Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.218818 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad"} err="failed to get container status \"033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad\": rpc error: code = NotFound desc = could not find container \"033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad\": container with ID starting with 033b9b5a4270270121849a19b0de200d2c12a2426c15b24b6103105c0004e6ad not found: ID does not exist" Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.218931 4867 scope.go:117] "RemoveContainer" containerID="223b5521c8d122a0bce447ed243859acdf7f74480ac420951b1926b60afd6c55" Jan 01 08:30:09 crc kubenswrapper[4867]: E0101 08:30:09.219424 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"223b5521c8d122a0bce447ed243859acdf7f74480ac420951b1926b60afd6c55\": container with ID starting with 223b5521c8d122a0bce447ed243859acdf7f74480ac420951b1926b60afd6c55 not found: ID does not exist" containerID="223b5521c8d122a0bce447ed243859acdf7f74480ac420951b1926b60afd6c55" Jan 01 08:30:09 crc kubenswrapper[4867]: I0101 08:30:09.219460 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"223b5521c8d122a0bce447ed243859acdf7f74480ac420951b1926b60afd6c55"} err="failed to get container status \"223b5521c8d122a0bce447ed243859acdf7f74480ac420951b1926b60afd6c55\": rpc error: code = NotFound desc = could not find container \"223b5521c8d122a0bce447ed243859acdf7f74480ac420951b1926b60afd6c55\": 
container with ID starting with 223b5521c8d122a0bce447ed243859acdf7f74480ac420951b1926b60afd6c55 not found: ID does not exist" Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.146835 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" path="/var/lib/kubelet/pods/a072e3d1-b363-49da-b227-6c6f7bb0aa9d/volumes" Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.147618 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2x67"] Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.148062 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d2x67" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" containerName="registry-server" containerID="cri-o://7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498" gracePeriod=2 Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.405029 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.524648 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-utilities\") pod \"8723fd85-0062-4c7e-b113-f46b791257f4\" (UID: \"8723fd85-0062-4c7e-b113-f46b791257f4\") " Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.524739 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6r4f\" (UniqueName: \"kubernetes.io/projected/8723fd85-0062-4c7e-b113-f46b791257f4-kube-api-access-j6r4f\") pod \"8723fd85-0062-4c7e-b113-f46b791257f4\" (UID: \"8723fd85-0062-4c7e-b113-f46b791257f4\") " Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.524788 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-catalog-content\") pod \"8723fd85-0062-4c7e-b113-f46b791257f4\" (UID: \"8723fd85-0062-4c7e-b113-f46b791257f4\") " Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.525910 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-utilities" (OuterVolumeSpecName: "utilities") pod "8723fd85-0062-4c7e-b113-f46b791257f4" (UID: "8723fd85-0062-4c7e-b113-f46b791257f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.531100 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8723fd85-0062-4c7e-b113-f46b791257f4-kube-api-access-j6r4f" (OuterVolumeSpecName: "kube-api-access-j6r4f") pod "8723fd85-0062-4c7e-b113-f46b791257f4" (UID: "8723fd85-0062-4c7e-b113-f46b791257f4"). InnerVolumeSpecName "kube-api-access-j6r4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.580691 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8723fd85-0062-4c7e-b113-f46b791257f4" (UID: "8723fd85-0062-4c7e-b113-f46b791257f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.626119 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6r4f\" (UniqueName: \"kubernetes.io/projected/8723fd85-0062-4c7e-b113-f46b791257f4-kube-api-access-j6r4f\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.626168 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:11 crc kubenswrapper[4867]: I0101 08:30:11.626185 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8723fd85-0062-4c7e-b113-f46b791257f4-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.014211 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2x67" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.131383 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-catalog-content\") pod \"390347a2-a9b2-4441-8910-1be8ea15282c\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.131454 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-utilities\") pod \"390347a2-a9b2-4441-8910-1be8ea15282c\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.131494 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tjmv\" (UniqueName: \"kubernetes.io/projected/390347a2-a9b2-4441-8910-1be8ea15282c-kube-api-access-2tjmv\") pod \"390347a2-a9b2-4441-8910-1be8ea15282c\" (UID: \"390347a2-a9b2-4441-8910-1be8ea15282c\") " Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.133022 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-utilities" (OuterVolumeSpecName: "utilities") pod "390347a2-a9b2-4441-8910-1be8ea15282c" (UID: "390347a2-a9b2-4441-8910-1be8ea15282c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.134686 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/390347a2-a9b2-4441-8910-1be8ea15282c-kube-api-access-2tjmv" (OuterVolumeSpecName: "kube-api-access-2tjmv") pod "390347a2-a9b2-4441-8910-1be8ea15282c" (UID: "390347a2-a9b2-4441-8910-1be8ea15282c"). InnerVolumeSpecName "kube-api-access-2tjmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.148973 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlxw" event={"ID":"8723fd85-0062-4c7e-b113-f46b791257f4","Type":"ContainerDied","Data":"e7803ea7e41964f3c4f8c817cc180ab756bbaea08ecd5086cb0876619a68af98"} Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.149016 4867 scope.go:117] "RemoveContainer" containerID="9958a470fe51fd14e4983c2894438d42602081d7d5b1bb7472be9aa1be38ed58" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.149029 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxlxw" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.154361 4867 generic.go:334] "Generic (PLEG): container finished" podID="390347a2-a9b2-4441-8910-1be8ea15282c" containerID="7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498" exitCode=0 Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.154468 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2x67" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.154913 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2x67" event={"ID":"390347a2-a9b2-4441-8910-1be8ea15282c","Type":"ContainerDied","Data":"7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498"} Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.154940 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2x67" event={"ID":"390347a2-a9b2-4441-8910-1be8ea15282c","Type":"ContainerDied","Data":"3cc98a1959707c2e28819cb157af1d1c398db52fe5e88251693c2ba363f43336"} Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.173662 4867 scope.go:117] "RemoveContainer" containerID="9ee0b1cbdfef2c13f9a916d635df4512c2d7637516fcd1e834555a56de72cdf9" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.195468 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxlxw"] Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.200646 4867 scope.go:117] "RemoveContainer" containerID="366d38b8d87b466d6cee49ee248f8b91652f0dc34c3f9457d33d8a132c1bbb74" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.202269 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qxlxw"] Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.233069 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.233100 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tjmv\" (UniqueName: \"kubernetes.io/projected/390347a2-a9b2-4441-8910-1be8ea15282c-kube-api-access-2tjmv\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:12 crc kubenswrapper[4867]: 
I0101 08:30:12.242768 4867 scope.go:117] "RemoveContainer" containerID="7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.256068 4867 scope.go:117] "RemoveContainer" containerID="1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.290111 4867 scope.go:117] "RemoveContainer" containerID="47f2e3e60e83d25e4e911ad12935321eb27514a4651e9ae627d52ae9dea47771" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.294918 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "390347a2-a9b2-4441-8910-1be8ea15282c" (UID: "390347a2-a9b2-4441-8910-1be8ea15282c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.306251 4867 scope.go:117] "RemoveContainer" containerID="7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498" Jan 01 08:30:12 crc kubenswrapper[4867]: E0101 08:30:12.306644 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498\": container with ID starting with 7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498 not found: ID does not exist" containerID="7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.306682 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498"} err="failed to get container status \"7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498\": rpc error: code = NotFound desc = could not find container 
\"7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498\": container with ID starting with 7913c0dc661080bbd0bcc8e184f1d5a5ec9e74ec3f0f29af4020cd982fa0a498 not found: ID does not exist" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.306714 4867 scope.go:117] "RemoveContainer" containerID="1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38" Jan 01 08:30:12 crc kubenswrapper[4867]: E0101 08:30:12.308070 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38\": container with ID starting with 1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38 not found: ID does not exist" containerID="1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.308104 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38"} err="failed to get container status \"1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38\": rpc error: code = NotFound desc = could not find container \"1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38\": container with ID starting with 1ed43ac00d20c5f55d0ca180433b712c9a7a44f84b52bc5c6276a44ea6cb8a38 not found: ID does not exist" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.308127 4867 scope.go:117] "RemoveContainer" containerID="47f2e3e60e83d25e4e911ad12935321eb27514a4651e9ae627d52ae9dea47771" Jan 01 08:30:12 crc kubenswrapper[4867]: E0101 08:30:12.308380 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47f2e3e60e83d25e4e911ad12935321eb27514a4651e9ae627d52ae9dea47771\": container with ID starting with 47f2e3e60e83d25e4e911ad12935321eb27514a4651e9ae627d52ae9dea47771 not found: ID does not exist" 
containerID="47f2e3e60e83d25e4e911ad12935321eb27514a4651e9ae627d52ae9dea47771" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.308429 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f2e3e60e83d25e4e911ad12935321eb27514a4651e9ae627d52ae9dea47771"} err="failed to get container status \"47f2e3e60e83d25e4e911ad12935321eb27514a4651e9ae627d52ae9dea47771\": rpc error: code = NotFound desc = could not find container \"47f2e3e60e83d25e4e911ad12935321eb27514a4651e9ae627d52ae9dea47771\": container with ID starting with 47f2e3e60e83d25e4e911ad12935321eb27514a4651e9ae627d52ae9dea47771 not found: ID does not exist" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.334094 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390347a2-a9b2-4441-8910-1be8ea15282c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.483163 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2x67"] Jan 01 08:30:12 crc kubenswrapper[4867]: I0101 08:30:12.487219 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d2x67"] Jan 01 08:30:13 crc kubenswrapper[4867]: I0101 08:30:13.139706 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" path="/var/lib/kubelet/pods/390347a2-a9b2-4441-8910-1be8ea15282c/volumes" Jan 01 08:30:13 crc kubenswrapper[4867]: I0101 08:30:13.141762 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8723fd85-0062-4c7e-b113-f46b791257f4" path="/var/lib/kubelet/pods/8723fd85-0062-4c7e-b113-f46b791257f4/volumes" Jan 01 08:30:17 crc kubenswrapper[4867]: I0101 08:30:17.796411 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wn4kc"] Jan 01 08:30:20 crc 
kubenswrapper[4867]: I0101 08:30:20.883754 4867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884499 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" containerName="extract-utilities" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.884520 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" containerName="extract-utilities" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884538 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c691144-adcf-4de6-b068-db1692decd23" containerName="collect-profiles" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.884550 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c691144-adcf-4de6-b068-db1692decd23" containerName="collect-profiles" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884567 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerName="extract-content" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.884580 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerName="extract-content" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884598 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8723fd85-0062-4c7e-b113-f46b791257f4" containerName="extract-content" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.884609 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8723fd85-0062-4c7e-b113-f46b791257f4" containerName="extract-content" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884626 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" containerName="registry-server" Jan 01 08:30:20 crc 
kubenswrapper[4867]: I0101 08:30:20.884638 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" containerName="registry-server" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884653 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8723fd85-0062-4c7e-b113-f46b791257f4" containerName="extract-utilities" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.884665 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8723fd85-0062-4c7e-b113-f46b791257f4" containerName="extract-utilities" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884685 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" containerName="extract-utilities" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.884696 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" containerName="extract-utilities" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884712 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" containerName="extract-content" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.884725 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" containerName="extract-content" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884745 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8723fd85-0062-4c7e-b113-f46b791257f4" containerName="registry-server" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.884757 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8723fd85-0062-4c7e-b113-f46b791257f4" containerName="registry-server" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884778 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" containerName="registry-server" Jan 01 08:30:20 crc 
kubenswrapper[4867]: I0101 08:30:20.884789 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" containerName="registry-server" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884815 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" containerName="extract-content" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.884826 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" containerName="extract-content" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884842 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerName="registry-server" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.884854 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerName="registry-server" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.884872 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerName="extract-utilities" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.884918 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerName="extract-utilities" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.885089 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee33b06-e0e4-458d-8b01-76c6f2d62891" containerName="registry-server" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.885113 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8723fd85-0062-4c7e-b113-f46b791257f4" containerName="registry-server" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.885135 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a072e3d1-b363-49da-b227-6c6f7bb0aa9d" containerName="registry-server" Jan 01 08:30:20 crc 
kubenswrapper[4867]: I0101 08:30:20.885155 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="390347a2-a9b2-4441-8910-1be8ea15282c" containerName="registry-server" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.885170 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c691144-adcf-4de6-b068-db1692decd23" containerName="collect-profiles" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.885688 4867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.885930 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.886266 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0" gracePeriod=15 Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.886356 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66" gracePeriod=15 Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.886409 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0" gracePeriod=15 Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.886323 4867 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a" gracePeriod=15 Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.886146 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967" gracePeriod=15 Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.887285 4867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.887581 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.887600 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.887621 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.887635 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.887669 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.887682 4867 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.887703 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.887716 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.887734 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.887745 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 01 08:30:20 crc kubenswrapper[4867]: E0101 08:30:20.887763 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.887775 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.887973 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.887992 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.888009 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.888028 4867 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 01 08:30:20 crc kubenswrapper[4867]: I0101 08:30:20.888044 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.050076 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.050439 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.050475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.050506 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.050532 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.050572 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.050595 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.050626 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.089193 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 01 08:30:21 crc kubenswrapper[4867]: 
I0101 08:30:21.089283 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.131787 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152372 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152565 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152621 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152684 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152758 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152790 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152807 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152821 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152839 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152934 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152771 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.152963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.153007 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.153038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.153107 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.209341 4867 generic.go:334] "Generic (PLEG): container finished" podID="7a01a834-2d82-4263-8abd-362d32ab94ec" containerID="dd48b94ccfa36e4a7327af4123737a3d7a8f019517a5078b5c3445869cdfdf96" exitCode=0 Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.209438 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a01a834-2d82-4263-8abd-362d32ab94ec","Type":"ContainerDied","Data":"dd48b94ccfa36e4a7327af4123737a3d7a8f019517a5078b5c3445869cdfdf96"} Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.210224 4867 status_manager.go:851] "Failed to get status for pod" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.214830 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.216144 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a" exitCode=0 Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.216184 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0" exitCode=0 Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.216199 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66" exitCode=0 Jan 01 08:30:21 crc kubenswrapper[4867]: I0101 08:30:21.216215 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0" exitCode=2 Jan 01 08:30:22 crc kubenswrapper[4867]: I0101 08:30:22.558189 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:30:22 crc kubenswrapper[4867]: I0101 08:30:22.560153 4867 status_manager.go:851] "Failed to get status for pod" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:22 crc kubenswrapper[4867]: I0101 08:30:22.673946 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-var-lock\") pod \"7a01a834-2d82-4263-8abd-362d32ab94ec\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " Jan 01 08:30:22 crc kubenswrapper[4867]: I0101 08:30:22.674049 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-var-lock" (OuterVolumeSpecName: "var-lock") pod "7a01a834-2d82-4263-8abd-362d32ab94ec" (UID: "7a01a834-2d82-4263-8abd-362d32ab94ec"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:30:22 crc kubenswrapper[4867]: I0101 08:30:22.674080 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a01a834-2d82-4263-8abd-362d32ab94ec-kube-api-access\") pod \"7a01a834-2d82-4263-8abd-362d32ab94ec\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " Jan 01 08:30:22 crc kubenswrapper[4867]: I0101 08:30:22.675069 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-kubelet-dir\") pod \"7a01a834-2d82-4263-8abd-362d32ab94ec\" (UID: \"7a01a834-2d82-4263-8abd-362d32ab94ec\") " Jan 01 08:30:22 crc kubenswrapper[4867]: I0101 08:30:22.675125 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7a01a834-2d82-4263-8abd-362d32ab94ec" (UID: "7a01a834-2d82-4263-8abd-362d32ab94ec"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:30:22 crc kubenswrapper[4867]: I0101 08:30:22.675441 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:22 crc kubenswrapper[4867]: I0101 08:30:22.675479 4867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a01a834-2d82-4263-8abd-362d32ab94ec-var-lock\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:22 crc kubenswrapper[4867]: I0101 08:30:22.682829 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a01a834-2d82-4263-8abd-362d32ab94ec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7a01a834-2d82-4263-8abd-362d32ab94ec" (UID: "7a01a834-2d82-4263-8abd-362d32ab94ec"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:30:22 crc kubenswrapper[4867]: I0101 08:30:22.776221 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a01a834-2d82-4263-8abd-362d32ab94ec-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.227298 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a01a834-2d82-4263-8abd-362d32ab94ec","Type":"ContainerDied","Data":"2b1466554192a5557db0a90d759c9ea7875ac36603d043d8a51c01e808761d65"} Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.227645 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b1466554192a5557db0a90d759c9ea7875ac36603d043d8a51c01e808761d65" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.227706 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.230625 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.231609 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967" exitCode=0 Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.231652 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3c66e1048223d85789e48ccd81b6d9080382dcaca8b147d703bfe926a7fa313" Jan 01 08:30:23 crc kubenswrapper[4867]: E0101 08:30:23.265622 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967.scope\": RecentStats: unable to find data in memory cache]" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.286083 4867 status_manager.go:851] "Failed to get status for pod" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.289045 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.289856 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.290109 4867 status_manager.go:851] "Failed to get status for pod" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.290304 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.308906 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.309000 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.309108 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.309130 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.309343 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.309409 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.309756 4867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.309833 4867 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:23 crc kubenswrapper[4867]: I0101 08:30:23.309918 4867 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:24 crc kubenswrapper[4867]: I0101 08:30:24.237374 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:24 crc kubenswrapper[4867]: I0101 08:30:24.265194 4867 status_manager.go:851] "Failed to get status for pod" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:24 crc kubenswrapper[4867]: I0101 08:30:24.265787 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:25 crc kubenswrapper[4867]: I0101 08:30:25.141803 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" 
path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 01 08:30:25 crc kubenswrapper[4867]: E0101 08:30:25.686535 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:25 crc kubenswrapper[4867]: E0101 08:30:25.688615 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:25 crc kubenswrapper[4867]: E0101 08:30:25.689640 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:25 crc kubenswrapper[4867]: E0101 08:30:25.689961 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:25 crc kubenswrapper[4867]: E0101 08:30:25.690220 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:25 crc kubenswrapper[4867]: I0101 08:30:25.690279 4867 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 01 08:30:25 crc kubenswrapper[4867]: E0101 08:30:25.690679 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial 
tcp 38.102.83.12:6443: connect: connection refused" interval="200ms" Jan 01 08:30:25 crc kubenswrapper[4867]: E0101 08:30:25.891986 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="400ms" Jan 01 08:30:25 crc kubenswrapper[4867]: E0101 08:30:25.928786 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:25 crc kubenswrapper[4867]: I0101 08:30:25.929560 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:25 crc kubenswrapper[4867]: E0101 08:30:25.969702 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18868e2c97fc2b46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-01 08:30:25.96908935 +0000 UTC m=+235.104358149,LastTimestamp:2026-01-01 08:30:25.96908935 +0000 UTC 
m=+235.104358149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 01 08:30:26 crc kubenswrapper[4867]: I0101 08:30:26.262453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391"} Jan 01 08:30:26 crc kubenswrapper[4867]: I0101 08:30:26.262841 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"db08ab7cf8ab397d7bca0cfa849f39033071d05a9c9fdd89ae8c6cf1bd87ecfd"} Jan 01 08:30:26 crc kubenswrapper[4867]: I0101 08:30:26.263528 4867 status_manager.go:851] "Failed to get status for pod" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:26 crc kubenswrapper[4867]: E0101 08:30:26.263617 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:30:26 crc kubenswrapper[4867]: E0101 08:30:26.292992 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="800ms" Jan 01 08:30:27 crc kubenswrapper[4867]: E0101 08:30:27.094399 4867 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="1.6s" Jan 01 08:30:27 crc kubenswrapper[4867]: E0101 08:30:27.146962 4867 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" volumeName="registry-storage" Jan 01 08:30:28 crc kubenswrapper[4867]: E0101 08:30:28.049471 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:30:28Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:30:28Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:30:28Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-01T08:30:28Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection 
refused" Jan 01 08:30:28 crc kubenswrapper[4867]: E0101 08:30:28.050046 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:28 crc kubenswrapper[4867]: E0101 08:30:28.050552 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:28 crc kubenswrapper[4867]: E0101 08:30:28.051023 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:28 crc kubenswrapper[4867]: E0101 08:30:28.051482 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:28 crc kubenswrapper[4867]: E0101 08:30:28.052061 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 01 08:30:28 crc kubenswrapper[4867]: E0101 08:30:28.695577 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="3.2s" Jan 01 08:30:31 crc kubenswrapper[4867]: I0101 08:30:31.134826 4867 status_manager.go:851] "Failed to get status for pod" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:31 crc kubenswrapper[4867]: E0101 08:30:31.896910 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="6.4s" Jan 01 08:30:34 crc kubenswrapper[4867]: E0101 08:30:34.191938 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18868e2c97fc2b46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-01 08:30:25.96908935 +0000 UTC m=+235.104358149,LastTimestamp:2026-01-01 08:30:25.96908935 +0000 UTC m=+235.104358149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 01 08:30:35 crc kubenswrapper[4867]: I0101 08:30:35.322592 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 01 08:30:35 crc kubenswrapper[4867]: I0101 
08:30:35.322689 4867 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288" exitCode=1 Jan 01 08:30:35 crc kubenswrapper[4867]: I0101 08:30:35.322737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288"} Jan 01 08:30:35 crc kubenswrapper[4867]: I0101 08:30:35.323502 4867 scope.go:117] "RemoveContainer" containerID="fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288" Jan 01 08:30:35 crc kubenswrapper[4867]: I0101 08:30:35.323952 4867 status_manager.go:851] "Failed to get status for pod" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:35 crc kubenswrapper[4867]: I0101 08:30:35.324565 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:36 crc kubenswrapper[4867]: I0101 08:30:36.128591 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:36 crc kubenswrapper[4867]: I0101 08:30:36.130348 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:36 crc kubenswrapper[4867]: I0101 08:30:36.131003 4867 status_manager.go:851] "Failed to get status for pod" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:36 crc kubenswrapper[4867]: I0101 08:30:36.156559 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="04fcb27f-6a52-491e-ad08-b0c273c9ff52" Jan 01 08:30:36 crc kubenswrapper[4867]: I0101 08:30:36.156610 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="04fcb27f-6a52-491e-ad08-b0c273c9ff52" Jan 01 08:30:36 crc kubenswrapper[4867]: E0101 08:30:36.157358 4867 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:36 crc kubenswrapper[4867]: I0101 08:30:36.158030 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:36 crc kubenswrapper[4867]: I0101 08:30:36.337573 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 01 08:30:36 crc kubenswrapper[4867]: I0101 08:30:36.337714 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"82847b036354e1a74368983ccb47957385072fe8faf69d4dd7f53de83505e17d"} Jan 01 08:30:36 crc kubenswrapper[4867]: I0101 08:30:36.338966 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:36 crc kubenswrapper[4867]: I0101 08:30:36.339513 4867 status_manager.go:851] "Failed to get status for pod" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:36 crc kubenswrapper[4867]: I0101 08:30:36.340448 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f4985edfd55ab6eb69d862c51b7f34bd3782053f1798aacdc963b4e2004ad90d"} Jan 01 08:30:37 crc kubenswrapper[4867]: I0101 08:30:37.351207 4867 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerID="6aa7143c204e81df456d9633460280bc556f8a2adc048f1554dc2c1380d14eb2" exitCode=0 Jan 01 08:30:37 crc kubenswrapper[4867]: I0101 08:30:37.351293 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6aa7143c204e81df456d9633460280bc556f8a2adc048f1554dc2c1380d14eb2"} Jan 01 08:30:37 crc kubenswrapper[4867]: I0101 08:30:37.351672 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="04fcb27f-6a52-491e-ad08-b0c273c9ff52" Jan 01 08:30:37 crc kubenswrapper[4867]: I0101 08:30:37.351713 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="04fcb27f-6a52-491e-ad08-b0c273c9ff52" Jan 01 08:30:37 crc kubenswrapper[4867]: E0101 08:30:37.352412 4867 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:37 crc kubenswrapper[4867]: I0101 08:30:37.353638 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:37 crc kubenswrapper[4867]: I0101 08:30:37.357297 4867 status_manager.go:851] "Failed to get status for pod" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 01 08:30:38 crc 
kubenswrapper[4867]: I0101 08:30:38.258045 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:30:38 crc kubenswrapper[4867]: I0101 08:30:38.376213 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9903ff097b211bd19ebc455eb0de07801327e216a309a4b561a3a02d4ec592b6"} Jan 01 08:30:38 crc kubenswrapper[4867]: I0101 08:30:38.376256 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5a2ed2ee56be460b3471c49ee659b3ec36d22effaf4241af95707d408cf94759"} Jan 01 08:30:38 crc kubenswrapper[4867]: I0101 08:30:38.376273 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e68496b1325a8cbbfda3da07ffc6254a102352d4a929d46bd5853f6fb95ffadb"} Jan 01 08:30:39 crc kubenswrapper[4867]: I0101 08:30:39.387248 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0d55d8656443e0b04c5d0e1ea5ac3ba1d27808bffac8fcb936ceb8b7ba87f086"} Jan 01 08:30:39 crc kubenswrapper[4867]: I0101 08:30:39.387289 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"082da958d165da5dcef7b4c527d59584d79ee9c1a88aab70c28417b242a7592f"} Jan 01 08:30:39 crc kubenswrapper[4867]: I0101 08:30:39.387493 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:39 crc kubenswrapper[4867]: I0101 
08:30:39.387701 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="04fcb27f-6a52-491e-ad08-b0c273c9ff52" Jan 01 08:30:39 crc kubenswrapper[4867]: I0101 08:30:39.387736 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="04fcb27f-6a52-491e-ad08-b0c273c9ff52" Jan 01 08:30:41 crc kubenswrapper[4867]: I0101 08:30:41.158503 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:41 crc kubenswrapper[4867]: I0101 08:30:41.158951 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:41 crc kubenswrapper[4867]: I0101 08:30:41.167989 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:42 crc kubenswrapper[4867]: I0101 08:30:42.824615 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" podUID="158aa7f6-a8d3-4a58-a437-5962f1fc90a2" containerName="oauth-openshift" containerID="cri-o://6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd" gracePeriod=15 Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.214560 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.296255 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-ocp-branding-template\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.296364 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-idp-0-file-data\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.296394 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-cliconfig\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297298 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-provider-selection\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297354 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-router-certs\") pod 
\"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297412 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-dir\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297465 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-policies\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297497 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-trusted-ca-bundle\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297486 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297552 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-login\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297565 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297581 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hhnd\" (UniqueName: \"kubernetes.io/projected/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-kube-api-access-2hhnd\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297622 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-error\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-service-ca\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: 
\"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297696 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-session\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.297755 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-serving-cert\") pod \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\" (UID: \"158aa7f6-a8d3-4a58-a437-5962f1fc90a2\") " Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.298196 4867 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.298217 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.298594 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.299793 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.299946 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.305569 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.306046 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.306497 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.307017 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.307424 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.307773 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.308387 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-kube-api-access-2hhnd" (OuterVolumeSpecName: "kube-api-access-2hhnd") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "kube-api-access-2hhnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.308429 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.310231 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "158aa7f6-a8d3-4a58-a437-5962f1fc90a2" (UID: "158aa7f6-a8d3-4a58-a437-5962f1fc90a2"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399342 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399402 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399423 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hhnd\" (UniqueName: \"kubernetes.io/projected/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-kube-api-access-2hhnd\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399442 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399460 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399478 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399498 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399519 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399537 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399555 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399573 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.399591 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/158aa7f6-a8d3-4a58-a437-5962f1fc90a2-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.434371 4867 generic.go:334] "Generic (PLEG): container finished" podID="158aa7f6-a8d3-4a58-a437-5962f1fc90a2" containerID="6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd" exitCode=0 Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.434440 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" event={"ID":"158aa7f6-a8d3-4a58-a437-5962f1fc90a2","Type":"ContainerDied","Data":"6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd"} Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.434482 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" event={"ID":"158aa7f6-a8d3-4a58-a437-5962f1fc90a2","Type":"ContainerDied","Data":"0b603f3b8f36c65d699b0b8691a489c873f55c6091901e2dd962d79065ffb475"} Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.434513 4867 scope.go:117] "RemoveContainer" containerID="6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.434519 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wn4kc" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.474137 4867 scope.go:117] "RemoveContainer" containerID="6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd" Jan 01 08:30:43 crc kubenswrapper[4867]: E0101 08:30:43.475764 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd\": container with ID starting with 6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd not found: ID does not exist" containerID="6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd" Jan 01 08:30:43 crc kubenswrapper[4867]: I0101 08:30:43.475831 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd"} err="failed to get container status \"6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd\": rpc error: code = NotFound desc = could 
not find container \"6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd\": container with ID starting with 6bc09841d5f6ad466a25de95cfaa72daafa8154248ed7db0238842b0147fb3fd not found: ID does not exist" Jan 01 08:30:44 crc kubenswrapper[4867]: I0101 08:30:44.397697 4867 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:44 crc kubenswrapper[4867]: I0101 08:30:44.443401 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="04fcb27f-6a52-491e-ad08-b0c273c9ff52" Jan 01 08:30:44 crc kubenswrapper[4867]: I0101 08:30:44.443447 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="04fcb27f-6a52-491e-ad08-b0c273c9ff52" Jan 01 08:30:44 crc kubenswrapper[4867]: I0101 08:30:44.449944 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:44 crc kubenswrapper[4867]: I0101 08:30:44.484350 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7e220ddf-68ea-42da-a70c-26a9600c1ecb" Jan 01 08:30:44 crc kubenswrapper[4867]: E0101 08:30:44.514177 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 01 08:30:44 crc kubenswrapper[4867]: E0101 08:30:44.575304 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 01 08:30:44 crc kubenswrapper[4867]: I0101 08:30:44.588018 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:30:44 crc kubenswrapper[4867]: I0101 08:30:44.588438 4867 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 01 08:30:44 crc kubenswrapper[4867]: I0101 08:30:44.588486 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 01 08:30:44 crc kubenswrapper[4867]: E0101 08:30:44.791613 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 01 08:30:44 crc kubenswrapper[4867]: E0101 08:30:44.908737 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 01 08:30:44 crc kubenswrapper[4867]: E0101 08:30:44.983033 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 01 08:30:45 crc kubenswrapper[4867]: E0101 08:30:45.333988 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 01 08:30:45 crc kubenswrapper[4867]: I0101 08:30:45.449366 4867 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="04fcb27f-6a52-491e-ad08-b0c273c9ff52" Jan 01 08:30:45 crc kubenswrapper[4867]: I0101 08:30:45.449421 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="04fcb27f-6a52-491e-ad08-b0c273c9ff52" Jan 01 08:30:45 crc kubenswrapper[4867]: I0101 08:30:45.453381 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7e220ddf-68ea-42da-a70c-26a9600c1ecb" Jan 01 08:30:53 crc kubenswrapper[4867]: I0101 08:30:53.736391 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 01 08:30:53 crc kubenswrapper[4867]: I0101 08:30:53.836791 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 01 08:30:54 crc kubenswrapper[4867]: I0101 08:30:54.588469 4867 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 01 08:30:54 crc kubenswrapper[4867]: I0101 08:30:54.588560 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 01 08:30:54 crc kubenswrapper[4867]: I0101 08:30:54.945112 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 01 08:30:55 crc kubenswrapper[4867]: I0101 08:30:55.025415 4867 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 01 08:30:55 crc kubenswrapper[4867]: I0101 08:30:55.046792 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 01 08:30:55 crc kubenswrapper[4867]: I0101 08:30:55.396729 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 01 08:30:55 crc kubenswrapper[4867]: I0101 08:30:55.410967 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 01 08:30:55 crc kubenswrapper[4867]: I0101 08:30:55.467917 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 01 08:30:55 crc kubenswrapper[4867]: I0101 08:30:55.556987 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 01 08:30:55 crc kubenswrapper[4867]: I0101 08:30:55.742725 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.045841 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.189584 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.290929 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.308606 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.349957 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.358578 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.418862 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.747229 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.761558 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.786203 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.950852 4867 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 01 08:30:56 crc kubenswrapper[4867]: I0101 08:30:56.978347 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 01 08:30:57 crc kubenswrapper[4867]: I0101 08:30:57.039316 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 01 08:30:57 crc kubenswrapper[4867]: I0101 08:30:57.522015 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 01 08:30:57 crc kubenswrapper[4867]: 
I0101 08:30:57.555079 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 01 08:30:57 crc kubenswrapper[4867]: I0101 08:30:57.558344 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 01 08:30:57 crc kubenswrapper[4867]: I0101 08:30:57.616617 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 01 08:30:57 crc kubenswrapper[4867]: I0101 08:30:57.687110 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 01 08:30:57 crc kubenswrapper[4867]: I0101 08:30:57.798751 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 01 08:30:57 crc kubenswrapper[4867]: I0101 08:30:57.843619 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 01 08:30:57 crc kubenswrapper[4867]: I0101 08:30:57.902231 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.108607 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.111761 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.267036 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.275604 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 01 08:30:58 crc kubenswrapper[4867]: 
I0101 08:30:58.337496 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.340223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.442972 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.455398 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.467229 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.615333 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.621615 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.649617 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.669782 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.694875 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.856159 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 01 08:30:58 crc kubenswrapper[4867]: I0101 08:30:58.963693 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.094942 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.212290 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.329946 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.361106 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.399210 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.411493 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.527340 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.532667 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.580842 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.612772 
4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.647213 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.664697 4867 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.672117 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-wn4kc"] Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.672232 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.676565 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.694431 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.694412068 podStartE2EDuration="15.694412068s" podCreationTimestamp="2026-01-01 08:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:30:59.690544672 +0000 UTC m=+268.825813481" watchObservedRunningTime="2026-01-01 08:30:59.694412068 +0000 UTC m=+268.829680837" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.709317 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.855751 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 01 
08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.892223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.929718 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 01 08:30:59 crc kubenswrapper[4867]: I0101 08:30:59.975979 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.057100 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.203237 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.238151 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.252800 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.282293 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.327941 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.458236 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.478499 4867 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.492997 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.495843 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.506284 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.569857 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.632269 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.681510 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.726380 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.761087 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.859834 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.880254 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.923787 4867 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 01 08:31:00 crc kubenswrapper[4867]: I0101 08:31:00.971107 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.141253 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158aa7f6-a8d3-4a58-a437-5962f1fc90a2" path="/var/lib/kubelet/pods/158aa7f6-a8d3-4a58-a437-5962f1fc90a2/volumes" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.141442 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.165408 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.241725 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.265004 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.287564 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.291675 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.311472 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.340359 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.376369 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.536352 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.566496 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.588819 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.676533 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.792037 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.817245 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.859833 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.884694 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.940183 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.940192 4867 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 01 08:31:01 crc kubenswrapper[4867]: I0101 08:31:01.953023 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.086151 4867 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.099812 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.116977 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.129673 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.156380 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.207172 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.237459 4867 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.241104 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.251451 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.270537 4867 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.380065 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.482205 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.490484 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.578948 4867 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.604782 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.614901 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.635123 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.840558 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.896118 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.948047 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.958269 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 01 08:31:02 crc kubenswrapper[4867]: I0101 08:31:02.993295 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.052307 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.100533 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.128237 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.195986 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.239527 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.361807 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.406703 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.422525 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.437559 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.611106 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.648223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.654441 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.690591 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.775914 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 01 08:31:03 crc kubenswrapper[4867]: I0101 08:31:03.907822 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.017078 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.018982 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.053917 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.149501 4867 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.277630 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.419161 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.433843 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.588478 4867 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.588544 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.588605 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.589249 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"82847b036354e1a74368983ccb47957385072fe8faf69d4dd7f53de83505e17d"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container 
kube-controller-manager failed startup probe, will be restarted" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.589410 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://82847b036354e1a74368983ccb47957385072fe8faf69d4dd7f53de83505e17d" gracePeriod=30 Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.665259 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.684374 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.684556 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.686939 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.781329 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 01 08:31:04 crc kubenswrapper[4867]: I0101 08:31:04.986552 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.005776 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.171586 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 01 08:31:05 crc kubenswrapper[4867]: 
I0101 08:31:05.269847 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.315435 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.388767 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-df7774cfb-qsxzg"] Jan 01 08:31:05 crc kubenswrapper[4867]: E0101 08:31:05.389075 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" containerName="installer" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.389097 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" containerName="installer" Jan 01 08:31:05 crc kubenswrapper[4867]: E0101 08:31:05.389120 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158aa7f6-a8d3-4a58-a437-5962f1fc90a2" containerName="oauth-openshift" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.389132 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="158aa7f6-a8d3-4a58-a437-5962f1fc90a2" containerName="oauth-openshift" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.389321 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a01a834-2d82-4263-8abd-362d32ab94ec" containerName="installer" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.389346 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="158aa7f6-a8d3-4a58-a437-5962f1fc90a2" containerName="oauth-openshift" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.389951 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.392131 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.393004 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.393030 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.393005 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.398071 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.398478 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.398733 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.399107 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.399505 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.399613 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 01 08:31:05 crc kubenswrapper[4867]: 
I0101 08:31:05.400307 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.402610 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.411953 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-df7774cfb-qsxzg"] Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.444365 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.451197 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.457664 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.463223 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.473343 4867 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.499416 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.525105 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.584274 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-template-error\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.584325 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.584361 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.584396 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-audit-policies\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.584432 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-router-certs\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.585005 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.585127 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.585167 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.585265 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfs4j\" (UniqueName: \"kubernetes.io/projected/9e25e574-4afb-46a0-9cce-e055779d4bee-kube-api-access-sfs4j\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: 
\"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.585294 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-template-login\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.585311 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-service-ca\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.585331 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e25e574-4afb-46a0-9cce-e055779d4bee-audit-dir\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.585352 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.585371 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-session\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.614843 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.680453 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.686771 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.686848 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-audit-policies\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.686929 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-router-certs\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " 
pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.686997 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687049 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687098 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687185 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfs4j\" (UniqueName: \"kubernetes.io/projected/9e25e574-4afb-46a0-9cce-e055779d4bee-kube-api-access-sfs4j\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687233 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-template-login\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687270 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-service-ca\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e25e574-4afb-46a0-9cce-e055779d4bee-audit-dir\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687369 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687412 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-session\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " 
pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687461 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-template-error\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687484 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687484 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-audit-policies\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.687509 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.688020 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/9e25e574-4afb-46a0-9cce-e055779d4bee-audit-dir\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.688583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-service-ca\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.689691 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.692640 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.692875 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-session\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc 
kubenswrapper[4867]: I0101 08:31:05.693023 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.693184 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.693239 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-template-error\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.693743 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-router-certs\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.693745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.694183 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e25e574-4afb-46a0-9cce-e055779d4bee-v4-0-config-user-template-login\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.706327 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.707386 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfs4j\" (UniqueName: \"kubernetes.io/projected/9e25e574-4afb-46a0-9cce-e055779d4bee-kube-api-access-sfs4j\") pod \"oauth-openshift-df7774cfb-qsxzg\" (UID: \"9e25e574-4afb-46a0-9cce-e055779d4bee\") " pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.735334 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.740109 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.841000 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 01 08:31:05 crc kubenswrapper[4867]: I0101 08:31:05.946017 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-df7774cfb-qsxzg"] Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.015723 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.085430 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.339974 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.357540 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.371325 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.436101 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.452876 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.459320 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.526784 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.676873 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.680762 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.731158 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" event={"ID":"9e25e574-4afb-46a0-9cce-e055779d4bee","Type":"ContainerStarted","Data":"c3c83af5f4280b1383fe470b6f57ea98d5bfc71e4875dfb4ef002ae26dd7ccc1"} Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.731228 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" event={"ID":"9e25e574-4afb-46a0-9cce-e055779d4bee","Type":"ContainerStarted","Data":"2d7cf09a8da1ff1e475a0e850ba1d75abd21b4c2b377c21fb105efbaa6194c74"} Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.733225 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.783525 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.823945 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.849179 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.860585 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-df7774cfb-qsxzg" podStartSLOduration=49.86056443 podStartE2EDuration="49.86056443s" podCreationTimestamp="2026-01-01 08:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:31:06.772038562 +0000 UTC m=+275.907307351" watchObservedRunningTime="2026-01-01 08:31:06.86056443 +0000 UTC m=+275.995833209" Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.906684 4867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 01 08:31:06 crc kubenswrapper[4867]: I0101 08:31:06.907070 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391" gracePeriod=5 Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.019823 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.102970 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.115357 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.224190 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 
08:31:07.265385 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.327018 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.346483 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.378778 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.394396 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.425214 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.503717 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.545617 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.693375 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.701991 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.724144 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.724605 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.744844 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.747173 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.754022 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.799954 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.805260 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.874503 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.970771 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.971912 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 08:31:07.972128 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 01 08:31:07 crc kubenswrapper[4867]: I0101 
08:31:07.980727 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.062356 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.069745 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.154945 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.219158 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.361556 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.390038 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.473744 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.526770 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.624318 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.627955 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.789497 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.796317 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 01 08:31:08 crc kubenswrapper[4867]: I0101 08:31:08.869559 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 01 08:31:09 crc kubenswrapper[4867]: I0101 08:31:09.014444 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 01 08:31:09 crc kubenswrapper[4867]: I0101 08:31:09.073789 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 01 08:31:09 crc kubenswrapper[4867]: I0101 08:31:09.092532 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 01 08:31:09 crc kubenswrapper[4867]: I0101 08:31:09.138923 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 01 08:31:09 crc kubenswrapper[4867]: I0101 08:31:09.144694 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 01 08:31:09 crc kubenswrapper[4867]: I0101 08:31:09.366091 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 01 08:31:09 crc kubenswrapper[4867]: I0101 08:31:09.783013 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 01 08:31:10 crc kubenswrapper[4867]: I0101 08:31:10.077729 4867 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 01 08:31:10 crc kubenswrapper[4867]: I0101 08:31:10.350693 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.507128 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.507245 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.699467 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.699540 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.699615 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.699638 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" 
(UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.699663 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.699733 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.699749 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.699780 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.699869 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.700316 4867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.700340 4867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.700357 4867 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.700374 4867 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.709963 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.775525 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.775618 4867 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391" exitCode=137 Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.775682 4867 scope.go:117] "RemoveContainer" containerID="a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.775719 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.800868 4867 scope.go:117] "RemoveContainer" containerID="a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.801171 4867 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:12 crc kubenswrapper[4867]: E0101 08:31:12.801450 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391\": container with ID starting with a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391 not found: ID does not exist" containerID="a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391" Jan 01 08:31:12 crc kubenswrapper[4867]: I0101 08:31:12.801491 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391"} err="failed to get container status \"a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391\": rpc error: code = NotFound desc = could not find container \"a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391\": container with ID starting with a5d6a43197956fcee4f8be0b0c143f082bfcecb5dd194deef45346a62a885391 not found: ID does not exist" Jan 01 08:31:13 crc kubenswrapper[4867]: I0101 08:31:13.140930 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 01 08:31:23 crc kubenswrapper[4867]: I0101 08:31:23.857222 4867 generic.go:334] "Generic (PLEG): container finished" podID="15e74714-78ff-4351-9088-ddf6672ce8a5" containerID="c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0" exitCode=0 Jan 01 08:31:23 crc kubenswrapper[4867]: I0101 08:31:23.857641 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" event={"ID":"15e74714-78ff-4351-9088-ddf6672ce8a5","Type":"ContainerDied","Data":"c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0"} Jan 01 08:31:23 crc kubenswrapper[4867]: I0101 08:31:23.858601 4867 scope.go:117] "RemoveContainer" containerID="c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0" Jan 01 08:31:24 crc kubenswrapper[4867]: I0101 08:31:24.868586 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" event={"ID":"15e74714-78ff-4351-9088-ddf6672ce8a5","Type":"ContainerStarted","Data":"1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a"} Jan 01 08:31:24 crc kubenswrapper[4867]: I0101 08:31:24.869576 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:31:24 crc kubenswrapper[4867]: I0101 08:31:24.870800 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:31:25 crc kubenswrapper[4867]: I0101 08:31:25.712996 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 01 08:31:29 crc kubenswrapper[4867]: I0101 08:31:29.229315 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 01 08:31:30 crc kubenswrapper[4867]: I0101 08:31:30.995242 4867 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 01 08:31:33 crc kubenswrapper[4867]: I0101 08:31:33.234278 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 01 08:31:34 crc kubenswrapper[4867]: I0101 08:31:34.936667 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 01 08:31:34 crc kubenswrapper[4867]: I0101 08:31:34.938254 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 01 08:31:34 crc kubenswrapper[4867]: I0101 08:31:34.938304 4867 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="82847b036354e1a74368983ccb47957385072fe8faf69d4dd7f53de83505e17d" exitCode=137 Jan 01 08:31:34 crc kubenswrapper[4867]: I0101 08:31:34.938331 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"82847b036354e1a74368983ccb47957385072fe8faf69d4dd7f53de83505e17d"} Jan 01 08:31:34 crc kubenswrapper[4867]: I0101 08:31:34.938362 4867 scope.go:117] "RemoveContainer" containerID="fb9b9aae16cc1c29ffb288ab01b54fa559cfe599c48f3ed97fe62bcc6e5b3288" Jan 01 08:31:35 crc kubenswrapper[4867]: I0101 08:31:35.947429 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 01 08:31:35 crc kubenswrapper[4867]: I0101 08:31:35.948597 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6cb9e63933fe0b89eb83a0421754a60f2f5139cde04b74221320c8279cac3677"} Jan 01 08:31:38 crc kubenswrapper[4867]: I0101 08:31:38.258192 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:31:38 crc kubenswrapper[4867]: I0101 08:31:38.491143 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 01 08:31:41 crc kubenswrapper[4867]: I0101 08:31:41.222841 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 01 08:31:41 crc kubenswrapper[4867]: I0101 08:31:41.513833 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 01 08:31:41 crc kubenswrapper[4867]: I0101 08:31:41.993647 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 01 08:31:44 crc kubenswrapper[4867]: I0101 08:31:44.588382 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:31:44 crc kubenswrapper[4867]: I0101 08:31:44.595054 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:31:45 crc kubenswrapper[4867]: I0101 08:31:45.011504 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 01 08:31:47 crc kubenswrapper[4867]: I0101 08:31:47.071779 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 01 08:31:47 crc kubenswrapper[4867]: I0101 08:31:47.654411 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 01 08:31:51 crc kubenswrapper[4867]: I0101 08:31:51.331228 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:31:51 crc kubenswrapper[4867]: I0101 08:31:51.332204 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:31:54 crc kubenswrapper[4867]: I0101 08:31:54.652594 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jjglf"] Jan 01 08:31:54 crc kubenswrapper[4867]: I0101 08:31:54.653289 4867 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" podUID="02e8282b-37b6-4539-ad59-fae4c4c65a45" containerName="controller-manager" containerID="cri-o://288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff" gracePeriod=30 Jan 01 08:31:54 crc kubenswrapper[4867]: I0101 08:31:54.677171 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg"] Jan 01 08:31:54 crc kubenswrapper[4867]: I0101 08:31:54.677465 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" podUID="9aff785a-03ef-4b1a-93d6-e2674725b053" containerName="route-controller-manager" containerID="cri-o://009ccc0c7237d7edf16767bc1699bf6c13201f9ef5a80deb5da6618467562d9b" gracePeriod=30 Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.058463 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.081937 4867 generic.go:334] "Generic (PLEG): container finished" podID="02e8282b-37b6-4539-ad59-fae4c4c65a45" containerID="288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff" exitCode=0 Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.082004 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" event={"ID":"02e8282b-37b6-4539-ad59-fae4c4c65a45","Type":"ContainerDied","Data":"288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff"} Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.082030 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" 
event={"ID":"02e8282b-37b6-4539-ad59-fae4c4c65a45","Type":"ContainerDied","Data":"3b257c2977e4df99d1928656e5c44fde6cf619b30be51455b966ca0bda8595f0"} Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.082054 4867 scope.go:117] "RemoveContainer" containerID="288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.082141 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jjglf" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.084315 4867 generic.go:334] "Generic (PLEG): container finished" podID="9aff785a-03ef-4b1a-93d6-e2674725b053" containerID="009ccc0c7237d7edf16767bc1699bf6c13201f9ef5a80deb5da6618467562d9b" exitCode=0 Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.084349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" event={"ID":"9aff785a-03ef-4b1a-93d6-e2674725b053","Type":"ContainerDied","Data":"009ccc0c7237d7edf16767bc1699bf6c13201f9ef5a80deb5da6618467562d9b"} Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.096546 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwd6x\" (UniqueName: \"kubernetes.io/projected/02e8282b-37b6-4539-ad59-fae4c4c65a45-kube-api-access-dwd6x\") pod \"02e8282b-37b6-4539-ad59-fae4c4c65a45\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.096592 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02e8282b-37b6-4539-ad59-fae4c4c65a45-serving-cert\") pod \"02e8282b-37b6-4539-ad59-fae4c4c65a45\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.096617 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-client-ca\") pod \"02e8282b-37b6-4539-ad59-fae4c4c65a45\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.096697 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-proxy-ca-bundles\") pod \"02e8282b-37b6-4539-ad59-fae4c4c65a45\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.096728 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-config\") pod \"02e8282b-37b6-4539-ad59-fae4c4c65a45\" (UID: \"02e8282b-37b6-4539-ad59-fae4c4c65a45\") " Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.097461 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "02e8282b-37b6-4539-ad59-fae4c4c65a45" (UID: "02e8282b-37b6-4539-ad59-fae4c4c65a45"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.097498 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-client-ca" (OuterVolumeSpecName: "client-ca") pod "02e8282b-37b6-4539-ad59-fae4c4c65a45" (UID: "02e8282b-37b6-4539-ad59-fae4c4c65a45"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.097506 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-config" (OuterVolumeSpecName: "config") pod "02e8282b-37b6-4539-ad59-fae4c4c65a45" (UID: "02e8282b-37b6-4539-ad59-fae4c4c65a45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.103966 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e8282b-37b6-4539-ad59-fae4c4c65a45-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02e8282b-37b6-4539-ad59-fae4c4c65a45" (UID: "02e8282b-37b6-4539-ad59-fae4c4c65a45"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.104358 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e8282b-37b6-4539-ad59-fae4c4c65a45-kube-api-access-dwd6x" (OuterVolumeSpecName: "kube-api-access-dwd6x") pod "02e8282b-37b6-4539-ad59-fae4c4c65a45" (UID: "02e8282b-37b6-4539-ad59-fae4c4c65a45"). InnerVolumeSpecName "kube-api-access-dwd6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.115022 4867 scope.go:117] "RemoveContainer" containerID="288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff" Jan 01 08:31:55 crc kubenswrapper[4867]: E0101 08:31:55.115710 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff\": container with ID starting with 288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff not found: ID does not exist" containerID="288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.115760 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff"} err="failed to get container status \"288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff\": rpc error: code = NotFound desc = could not find container \"288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff\": container with ID starting with 288b30b3cc52e5102272912ca3e9a712cda2a4805fff6260e20ba74ac72476ff not found: ID does not exist" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.129433 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.197565 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dfqd\" (UniqueName: \"kubernetes.io/projected/9aff785a-03ef-4b1a-93d6-e2674725b053-kube-api-access-6dfqd\") pod \"9aff785a-03ef-4b1a-93d6-e2674725b053\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.197681 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-client-ca\") pod \"9aff785a-03ef-4b1a-93d6-e2674725b053\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.197714 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aff785a-03ef-4b1a-93d6-e2674725b053-serving-cert\") pod \"9aff785a-03ef-4b1a-93d6-e2674725b053\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.197734 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-config\") pod \"9aff785a-03ef-4b1a-93d6-e2674725b053\" (UID: \"9aff785a-03ef-4b1a-93d6-e2674725b053\") " Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.198412 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-client-ca" (OuterVolumeSpecName: "client-ca") pod "9aff785a-03ef-4b1a-93d6-e2674725b053" (UID: "9aff785a-03ef-4b1a-93d6-e2674725b053"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.198551 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-config" (OuterVolumeSpecName: "config") pod "9aff785a-03ef-4b1a-93d6-e2674725b053" (UID: "9aff785a-03ef-4b1a-93d6-e2674725b053"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.198014 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.198943 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.198955 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwd6x\" (UniqueName: \"kubernetes.io/projected/02e8282b-37b6-4539-ad59-fae4c4c65a45-kube-api-access-dwd6x\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.198967 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02e8282b-37b6-4539-ad59-fae4c4c65a45-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.198975 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02e8282b-37b6-4539-ad59-fae4c4c65a45-client-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.201626 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9aff785a-03ef-4b1a-93d6-e2674725b053-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9aff785a-03ef-4b1a-93d6-e2674725b053" (UID: "9aff785a-03ef-4b1a-93d6-e2674725b053"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.202339 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aff785a-03ef-4b1a-93d6-e2674725b053-kube-api-access-6dfqd" (OuterVolumeSpecName: "kube-api-access-6dfqd") pod "9aff785a-03ef-4b1a-93d6-e2674725b053" (UID: "9aff785a-03ef-4b1a-93d6-e2674725b053"). InnerVolumeSpecName "kube-api-access-6dfqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.300537 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-client-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.300582 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aff785a-03ef-4b1a-93d6-e2674725b053-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.300598 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aff785a-03ef-4b1a-93d6-e2674725b053-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.300614 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dfqd\" (UniqueName: \"kubernetes.io/projected/9aff785a-03ef-4b1a-93d6-e2674725b053-kube-api-access-6dfqd\") on node \"crc\" DevicePath \"\"" Jan 01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.401881 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jjglf"] Jan 
01 08:31:55 crc kubenswrapper[4867]: I0101 08:31:55.406475 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jjglf"] Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.092218 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" event={"ID":"9aff785a-03ef-4b1a-93d6-e2674725b053","Type":"ContainerDied","Data":"60445b314ea401024b0dab3b21a319a87fbe1b6c595219210a1c682cc3b81b05"} Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.092563 4867 scope.go:117] "RemoveContainer" containerID="009ccc0c7237d7edf16767bc1699bf6c13201f9ef5a80deb5da6618467562d9b" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.092264 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.125638 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb"] Jan 01 08:31:56 crc kubenswrapper[4867]: E0101 08:31:56.125922 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e8282b-37b6-4539-ad59-fae4c4c65a45" containerName="controller-manager" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.125938 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e8282b-37b6-4539-ad59-fae4c4c65a45" containerName="controller-manager" Jan 01 08:31:56 crc kubenswrapper[4867]: E0101 08:31:56.125952 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.125960 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 01 08:31:56 crc kubenswrapper[4867]: E0101 08:31:56.125973 4867 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aff785a-03ef-4b1a-93d6-e2674725b053" containerName="route-controller-manager" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.125982 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aff785a-03ef-4b1a-93d6-e2674725b053" containerName="route-controller-manager" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.126089 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aff785a-03ef-4b1a-93d6-e2674725b053" containerName="route-controller-manager" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.126103 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.126117 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e8282b-37b6-4539-ad59-fae4c4c65a45" containerName="controller-manager" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.126521 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.129471 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.129513 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.129471 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.129593 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.131262 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.131372 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.142389 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d4c67957-jq6wx"] Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.143414 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.147530 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.147620 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.147694 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.148456 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.148602 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.151247 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg"] Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.151551 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.156062 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j8lqg"] Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.160532 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.166015 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb"] Jan 
01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.173347 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d4c67957-jq6wx"] Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.212454 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/514e8940-921e-47c0-a6b1-423b3b97ddb3-client-ca\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.212516 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/514e8940-921e-47c0-a6b1-423b3b97ddb3-config\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.212627 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9pcq\" (UniqueName: \"kubernetes.io/projected/514e8940-921e-47c0-a6b1-423b3b97ddb3-kube-api-access-n9pcq\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.212664 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/514e8940-921e-47c0-a6b1-423b3b97ddb3-serving-cert\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 
01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.212722 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec998a9c-5cc8-437f-91b5-57621f40803a-serving-cert\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.212869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec998a9c-5cc8-437f-91b5-57621f40803a-client-ca\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.212948 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec998a9c-5cc8-437f-91b5-57621f40803a-proxy-ca-bundles\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.212984 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec998a9c-5cc8-437f-91b5-57621f40803a-config\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.213153 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qk22\" (UniqueName: 
\"kubernetes.io/projected/ec998a9c-5cc8-437f-91b5-57621f40803a-kube-api-access-4qk22\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.314318 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qk22\" (UniqueName: \"kubernetes.io/projected/ec998a9c-5cc8-437f-91b5-57621f40803a-kube-api-access-4qk22\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.314424 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/514e8940-921e-47c0-a6b1-423b3b97ddb3-client-ca\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.314473 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/514e8940-921e-47c0-a6b1-423b3b97ddb3-config\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.314536 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9pcq\" (UniqueName: \"kubernetes.io/projected/514e8940-921e-47c0-a6b1-423b3b97ddb3-kube-api-access-n9pcq\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 
08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.314571 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/514e8940-921e-47c0-a6b1-423b3b97ddb3-serving-cert\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.314616 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec998a9c-5cc8-437f-91b5-57621f40803a-serving-cert\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.314673 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec998a9c-5cc8-437f-91b5-57621f40803a-client-ca\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.315593 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/514e8940-921e-47c0-a6b1-423b3b97ddb3-client-ca\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.316021 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/514e8940-921e-47c0-a6b1-423b3b97ddb3-config\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " 
pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.316465 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec998a9c-5cc8-437f-91b5-57621f40803a-client-ca\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.316588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec998a9c-5cc8-437f-91b5-57621f40803a-proxy-ca-bundles\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.316710 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec998a9c-5cc8-437f-91b5-57621f40803a-config\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.316760 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec998a9c-5cc8-437f-91b5-57621f40803a-proxy-ca-bundles\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.318839 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec998a9c-5cc8-437f-91b5-57621f40803a-config\") pod 
\"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.319913 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/514e8940-921e-47c0-a6b1-423b3b97ddb3-serving-cert\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.321799 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec998a9c-5cc8-437f-91b5-57621f40803a-serving-cert\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.333924 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qk22\" (UniqueName: \"kubernetes.io/projected/ec998a9c-5cc8-437f-91b5-57621f40803a-kube-api-access-4qk22\") pod \"controller-manager-7d4c67957-jq6wx\" (UID: \"ec998a9c-5cc8-437f-91b5-57621f40803a\") " pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.340611 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9pcq\" (UniqueName: \"kubernetes.io/projected/514e8940-921e-47c0-a6b1-423b3b97ddb3-kube-api-access-n9pcq\") pod \"route-controller-manager-c9596ff9f-2c8kb\" (UID: \"514e8940-921e-47c0-a6b1-423b3b97ddb3\") " pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.453192 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.474266 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.794344 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d4c67957-jq6wx"] Jan 01 08:31:56 crc kubenswrapper[4867]: I0101 08:31:56.973747 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb"] Jan 01 08:31:56 crc kubenswrapper[4867]: W0101 08:31:56.982204 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod514e8940_921e_47c0_a6b1_423b3b97ddb3.slice/crio-b3e30694104a6224a0d595505e2cc2f72f35b37f6b1f92201823b184ee04581b WatchSource:0}: Error finding container b3e30694104a6224a0d595505e2cc2f72f35b37f6b1f92201823b184ee04581b: Status 404 returned error can't find the container with id b3e30694104a6224a0d595505e2cc2f72f35b37f6b1f92201823b184ee04581b Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.100391 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" event={"ID":"ec998a9c-5cc8-437f-91b5-57621f40803a","Type":"ContainerStarted","Data":"40784b5ed769762587c98973dfdaf9125237f3bac9936a42ad3677c9ae508057"} Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.100441 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" event={"ID":"ec998a9c-5cc8-437f-91b5-57621f40803a","Type":"ContainerStarted","Data":"34b695da3de03b73b6811a2a66eabf8a267bc3107a04aeaee35295fe0e838b9f"} Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.100553 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.103580 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" event={"ID":"514e8940-921e-47c0-a6b1-423b3b97ddb3","Type":"ContainerStarted","Data":"ad472e3d08f9ae539493eb13557e573bafaf2eb229cc99c7a2ff2b1bee6fe955"} Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.103631 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" event={"ID":"514e8940-921e-47c0-a6b1-423b3b97ddb3","Type":"ContainerStarted","Data":"b3e30694104a6224a0d595505e2cc2f72f35b37f6b1f92201823b184ee04581b"} Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.104114 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.105221 4867 patch_prober.go:28] interesting pod/route-controller-manager-c9596ff9f-2c8kb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.105269 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" podUID="514e8940-921e-47c0-a6b1-423b3b97ddb3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.105951 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.117608 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d4c67957-jq6wx" podStartSLOduration=3.117595387 podStartE2EDuration="3.117595387s" podCreationTimestamp="2026-01-01 08:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:31:57.115615596 +0000 UTC m=+326.250884395" watchObservedRunningTime="2026-01-01 08:31:57.117595387 +0000 UTC m=+326.252864176" Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.134826 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e8282b-37b6-4539-ad59-fae4c4c65a45" path="/var/lib/kubelet/pods/02e8282b-37b6-4539-ad59-fae4c4c65a45/volumes" Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.135597 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aff785a-03ef-4b1a-93d6-e2674725b053" path="/var/lib/kubelet/pods/9aff785a-03ef-4b1a-93d6-e2674725b053/volumes" Jan 01 08:31:57 crc kubenswrapper[4867]: I0101 08:31:57.139187 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" podStartSLOduration=3.139173297 podStartE2EDuration="3.139173297s" podCreationTimestamp="2026-01-01 08:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:31:57.136088013 +0000 UTC m=+326.271356792" watchObservedRunningTime="2026-01-01 08:31:57.139173297 +0000 UTC m=+326.274442066" Jan 01 08:31:58 crc kubenswrapper[4867]: I0101 08:31:58.117767 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c9596ff9f-2c8kb" Jan 01 
08:32:21 crc kubenswrapper[4867]: I0101 08:32:21.331697 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:32:21 crc kubenswrapper[4867]: I0101 08:32:21.332360 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.677002 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txdr6"] Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.677644 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-txdr6" podUID="72494188-2bff-4e14-8a71-041a84c049f2" containerName="registry-server" containerID="cri-o://cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607" gracePeriod=30 Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.691513 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmjjl"] Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.691751 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gmjjl" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerName="registry-server" containerID="cri-o://3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145" gracePeriod=30 Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.698442 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-8tlg5"] Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.698684 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" podUID="15e74714-78ff-4351-9088-ddf6672ce8a5" containerName="marketplace-operator" containerID="cri-o://1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a" gracePeriod=30 Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.710175 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr5fz"] Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.710489 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dr5fz" podUID="8b1938e8-f894-481e-a3d9-9050583ee8c2" containerName="registry-server" containerID="cri-o://621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb" gracePeriod=30 Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.720582 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bt5bw"] Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.720839 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bt5bw" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" containerName="registry-server" containerID="cri-o://8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29" gracePeriod=30 Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.723812 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m7q68"] Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.724442 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.749641 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m7q68"] Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.771578 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74dd8dbf-6baa-483d-8228-1248a8e3b791-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m7q68\" (UID: \"74dd8dbf-6baa-483d-8228-1248a8e3b791\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.771699 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq2w8\" (UniqueName: \"kubernetes.io/projected/74dd8dbf-6baa-483d-8228-1248a8e3b791-kube-api-access-vq2w8\") pod \"marketplace-operator-79b997595-m7q68\" (UID: \"74dd8dbf-6baa-483d-8228-1248a8e3b791\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.771738 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74dd8dbf-6baa-483d-8228-1248a8e3b791-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m7q68\" (UID: \"74dd8dbf-6baa-483d-8228-1248a8e3b791\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.877369 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq2w8\" (UniqueName: \"kubernetes.io/projected/74dd8dbf-6baa-483d-8228-1248a8e3b791-kube-api-access-vq2w8\") pod \"marketplace-operator-79b997595-m7q68\" (UID: 
\"74dd8dbf-6baa-483d-8228-1248a8e3b791\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.877414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74dd8dbf-6baa-483d-8228-1248a8e3b791-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m7q68\" (UID: \"74dd8dbf-6baa-483d-8228-1248a8e3b791\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.877447 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74dd8dbf-6baa-483d-8228-1248a8e3b791-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m7q68\" (UID: \"74dd8dbf-6baa-483d-8228-1248a8e3b791\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.878646 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74dd8dbf-6baa-483d-8228-1248a8e3b791-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m7q68\" (UID: \"74dd8dbf-6baa-483d-8228-1248a8e3b791\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.888303 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74dd8dbf-6baa-483d-8228-1248a8e3b791-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m7q68\" (UID: \"74dd8dbf-6baa-483d-8228-1248a8e3b791\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:35 crc kubenswrapper[4867]: I0101 08:32:35.900316 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vq2w8\" (UniqueName: \"kubernetes.io/projected/74dd8dbf-6baa-483d-8228-1248a8e3b791-kube-api-access-vq2w8\") pod \"marketplace-operator-79b997595-m7q68\" (UID: \"74dd8dbf-6baa-483d-8228-1248a8e3b791\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.044254 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.131611 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.180398 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.181938 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-catalog-content\") pod \"72494188-2bff-4e14-8a71-041a84c049f2\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.181986 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frr5l\" (UniqueName: \"kubernetes.io/projected/72494188-2bff-4e14-8a71-041a84c049f2-kube-api-access-frr5l\") pod \"72494188-2bff-4e14-8a71-041a84c049f2\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.182093 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-utilities\") pod \"72494188-2bff-4e14-8a71-041a84c049f2\" (UID: \"72494188-2bff-4e14-8a71-041a84c049f2\") " Jan 01 08:32:36 crc kubenswrapper[4867]: 
I0101 08:32:36.183224 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-utilities" (OuterVolumeSpecName: "utilities") pod "72494188-2bff-4e14-8a71-041a84c049f2" (UID: "72494188-2bff-4e14-8a71-041a84c049f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.186875 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72494188-2bff-4e14-8a71-041a84c049f2-kube-api-access-frr5l" (OuterVolumeSpecName: "kube-api-access-frr5l") pod "72494188-2bff-4e14-8a71-041a84c049f2" (UID: "72494188-2bff-4e14-8a71-041a84c049f2"). InnerVolumeSpecName "kube-api-access-frr5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.189570 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.215497 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.233161 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.264416 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72494188-2bff-4e14-8a71-041a84c049f2" (UID: "72494188-2bff-4e14-8a71-041a84c049f2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283284 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-622ml\" (UniqueName: \"kubernetes.io/projected/8cda336b-c663-4993-bdc1-66b729bf0740-kube-api-access-622ml\") pod \"8cda336b-c663-4993-bdc1-66b729bf0740\" (UID: \"8cda336b-c663-4993-bdc1-66b729bf0740\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283394 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-utilities\") pod \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283426 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-operator-metrics\") pod \"15e74714-78ff-4351-9088-ddf6672ce8a5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283501 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-utilities\") pod \"8b1938e8-f894-481e-a3d9-9050583ee8c2\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283529 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-catalog-content\") pod \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283592 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mcqn9\" (UniqueName: \"kubernetes.io/projected/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-kube-api-access-mcqn9\") pod \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\" (UID: \"bcb25595-1b19-4e0b-a711-f3e0ed8e0689\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5hks\" (UniqueName: \"kubernetes.io/projected/8b1938e8-f894-481e-a3d9-9050583ee8c2-kube-api-access-m5hks\") pod \"8b1938e8-f894-481e-a3d9-9050583ee8c2\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283658 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp8tg\" (UniqueName: \"kubernetes.io/projected/15e74714-78ff-4351-9088-ddf6672ce8a5-kube-api-access-fp8tg\") pod \"15e74714-78ff-4351-9088-ddf6672ce8a5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283698 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-trusted-ca\") pod \"15e74714-78ff-4351-9088-ddf6672ce8a5\" (UID: \"15e74714-78ff-4351-9088-ddf6672ce8a5\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283734 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-catalog-content\") pod \"8cda336b-c663-4993-bdc1-66b729bf0740\" (UID: \"8cda336b-c663-4993-bdc1-66b729bf0740\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283771 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-utilities\") pod \"8cda336b-c663-4993-bdc1-66b729bf0740\" (UID: 
\"8cda336b-c663-4993-bdc1-66b729bf0740\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.283797 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-catalog-content\") pod \"8b1938e8-f894-481e-a3d9-9050583ee8c2\" (UID: \"8b1938e8-f894-481e-a3d9-9050583ee8c2\") " Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.284268 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.284287 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72494188-2bff-4e14-8a71-041a84c049f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.284302 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frr5l\" (UniqueName: \"kubernetes.io/projected/72494188-2bff-4e14-8a71-041a84c049f2-kube-api-access-frr5l\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.284686 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "15e74714-78ff-4351-9088-ddf6672ce8a5" (UID: "15e74714-78ff-4351-9088-ddf6672ce8a5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.285359 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-utilities" (OuterVolumeSpecName: "utilities") pod "8cda336b-c663-4993-bdc1-66b729bf0740" (UID: "8cda336b-c663-4993-bdc1-66b729bf0740"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.285405 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-utilities" (OuterVolumeSpecName: "utilities") pod "8b1938e8-f894-481e-a3d9-9050583ee8c2" (UID: "8b1938e8-f894-481e-a3d9-9050583ee8c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.286186 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-utilities" (OuterVolumeSpecName: "utilities") pod "bcb25595-1b19-4e0b-a711-f3e0ed8e0689" (UID: "bcb25595-1b19-4e0b-a711-f3e0ed8e0689"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.288009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e74714-78ff-4351-9088-ddf6672ce8a5-kube-api-access-fp8tg" (OuterVolumeSpecName: "kube-api-access-fp8tg") pod "15e74714-78ff-4351-9088-ddf6672ce8a5" (UID: "15e74714-78ff-4351-9088-ddf6672ce8a5"). InnerVolumeSpecName "kube-api-access-fp8tg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.288226 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-kube-api-access-mcqn9" (OuterVolumeSpecName: "kube-api-access-mcqn9") pod "bcb25595-1b19-4e0b-a711-f3e0ed8e0689" (UID: "bcb25595-1b19-4e0b-a711-f3e0ed8e0689"). InnerVolumeSpecName "kube-api-access-mcqn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.289421 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1938e8-f894-481e-a3d9-9050583ee8c2-kube-api-access-m5hks" (OuterVolumeSpecName: "kube-api-access-m5hks") pod "8b1938e8-f894-481e-a3d9-9050583ee8c2" (UID: "8b1938e8-f894-481e-a3d9-9050583ee8c2"). InnerVolumeSpecName "kube-api-access-m5hks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.289489 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "15e74714-78ff-4351-9088-ddf6672ce8a5" (UID: "15e74714-78ff-4351-9088-ddf6672ce8a5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.311470 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b1938e8-f894-481e-a3d9-9050583ee8c2" (UID: "8b1938e8-f894-481e-a3d9-9050583ee8c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.311623 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cda336b-c663-4993-bdc1-66b729bf0740-kube-api-access-622ml" (OuterVolumeSpecName: "kube-api-access-622ml") pod "8cda336b-c663-4993-bdc1-66b729bf0740" (UID: "8cda336b-c663-4993-bdc1-66b729bf0740"). InnerVolumeSpecName "kube-api-access-622ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.336220 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcb25595-1b19-4e0b-a711-f3e0ed8e0689" (UID: "bcb25595-1b19-4e0b-a711-f3e0ed8e0689"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.382100 4867 generic.go:334] "Generic (PLEG): container finished" podID="8cda336b-c663-4993-bdc1-66b729bf0740" containerID="8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29" exitCode=0 Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.382140 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt5bw" event={"ID":"8cda336b-c663-4993-bdc1-66b729bf0740","Type":"ContainerDied","Data":"8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29"} Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.382203 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bt5bw" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.382230 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt5bw" event={"ID":"8cda336b-c663-4993-bdc1-66b729bf0740","Type":"ContainerDied","Data":"661b50172ba99e1c4d18945a6619c6b5356936cd4642917f465552f1e1aeaf1e"} Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.382253 4867 scope.go:117] "RemoveContainer" containerID="8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385050 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp8tg\" (UniqueName: \"kubernetes.io/projected/15e74714-78ff-4351-9088-ddf6672ce8a5-kube-api-access-fp8tg\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385078 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385089 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385100 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385110 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-622ml\" (UniqueName: \"kubernetes.io/projected/8cda336b-c663-4993-bdc1-66b729bf0740-kube-api-access-622ml\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 
08:32:36.385120 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385128 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15e74714-78ff-4351-9088-ddf6672ce8a5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385137 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1938e8-f894-481e-a3d9-9050583ee8c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385145 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385153 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcqn9\" (UniqueName: \"kubernetes.io/projected/bcb25595-1b19-4e0b-a711-f3e0ed8e0689-kube-api-access-mcqn9\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385160 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5hks\" (UniqueName: \"kubernetes.io/projected/8b1938e8-f894-481e-a3d9-9050583ee8c2-kube-api-access-m5hks\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385157 4867 generic.go:334] "Generic (PLEG): container finished" podID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerID="3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145" exitCode=0 Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385213 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-gmjjl" event={"ID":"bcb25595-1b19-4e0b-a711-f3e0ed8e0689","Type":"ContainerDied","Data":"3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145"} Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385240 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmjjl" event={"ID":"bcb25595-1b19-4e0b-a711-f3e0ed8e0689","Type":"ContainerDied","Data":"eb78519c532b2ef04198c41a771f58ce9c8da21f1aacdc8c038ec3fc7e5357a3"} Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.385241 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmjjl" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.388340 4867 generic.go:334] "Generic (PLEG): container finished" podID="72494188-2bff-4e14-8a71-041a84c049f2" containerID="cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607" exitCode=0 Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.388392 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txdr6" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.388416 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txdr6" event={"ID":"72494188-2bff-4e14-8a71-041a84c049f2","Type":"ContainerDied","Data":"cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607"} Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.388464 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txdr6" event={"ID":"72494188-2bff-4e14-8a71-041a84c049f2","Type":"ContainerDied","Data":"ff17e0b9138056267a34bb722bc75e20ac75c8902ba3b6321deb2914a0a5b3f9"} Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.390217 4867 generic.go:334] "Generic (PLEG): container finished" podID="15e74714-78ff-4351-9088-ddf6672ce8a5" containerID="1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a" exitCode=0 Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.390242 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.390301 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" event={"ID":"15e74714-78ff-4351-9088-ddf6672ce8a5","Type":"ContainerDied","Data":"1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a"} Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.390331 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8tlg5" event={"ID":"15e74714-78ff-4351-9088-ddf6672ce8a5","Type":"ContainerDied","Data":"9c30c68d2f91ede95d3251d8395311039ade25d031aabe34d61774cee20869bb"} Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.392017 4867 generic.go:334] "Generic (PLEG): container finished" podID="8b1938e8-f894-481e-a3d9-9050583ee8c2" containerID="621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb" exitCode=0 Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.392049 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr5fz" event={"ID":"8b1938e8-f894-481e-a3d9-9050583ee8c2","Type":"ContainerDied","Data":"621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb"} Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.392070 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr5fz" event={"ID":"8b1938e8-f894-481e-a3d9-9050583ee8c2","Type":"ContainerDied","Data":"610ffc783fc5ad098307f5810b44b215957c1ce7ba5208e3ff05629065e294c1"} Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.392136 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr5fz" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.414829 4867 scope.go:117] "RemoveContainer" containerID="8ae66c9e2538394bb0b512c6122c3dc23dce4f661047a44dc6b4a1d5d58b4229" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.424927 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cda336b-c663-4993-bdc1-66b729bf0740" (UID: "8cda336b-c663-4993-bdc1-66b729bf0740"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.447103 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txdr6"] Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.457402 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-txdr6"] Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.460742 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8tlg5"] Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.466690 4867 scope.go:117] "RemoveContainer" containerID="0a6dcef94bc6eff3e35e95019b5f7d8c774fc9ed67b9fbb78f2c1dc26e34e760" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.480814 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8tlg5"] Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.496740 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cda336b-c663-4993-bdc1-66b729bf0740-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.496812 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-gmjjl"] Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.507403 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gmjjl"] Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.520952 4867 scope.go:117] "RemoveContainer" containerID="8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.523494 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29\": container with ID starting with 8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29 not found: ID does not exist" containerID="8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.523547 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29"} err="failed to get container status \"8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29\": rpc error: code = NotFound desc = could not find container \"8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29\": container with ID starting with 8210508ee7ddd57330f451d559f97151cd04a7be2cf6f7d8266fa96699ae6d29 not found: ID does not exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.523583 4867 scope.go:117] "RemoveContainer" containerID="8ae66c9e2538394bb0b512c6122c3dc23dce4f661047a44dc6b4a1d5d58b4229" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.524085 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ae66c9e2538394bb0b512c6122c3dc23dce4f661047a44dc6b4a1d5d58b4229\": container with ID starting with 
8ae66c9e2538394bb0b512c6122c3dc23dce4f661047a44dc6b4a1d5d58b4229 not found: ID does not exist" containerID="8ae66c9e2538394bb0b512c6122c3dc23dce4f661047a44dc6b4a1d5d58b4229" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.524113 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ae66c9e2538394bb0b512c6122c3dc23dce4f661047a44dc6b4a1d5d58b4229"} err="failed to get container status \"8ae66c9e2538394bb0b512c6122c3dc23dce4f661047a44dc6b4a1d5d58b4229\": rpc error: code = NotFound desc = could not find container \"8ae66c9e2538394bb0b512c6122c3dc23dce4f661047a44dc6b4a1d5d58b4229\": container with ID starting with 8ae66c9e2538394bb0b512c6122c3dc23dce4f661047a44dc6b4a1d5d58b4229 not found: ID does not exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.524133 4867 scope.go:117] "RemoveContainer" containerID="0a6dcef94bc6eff3e35e95019b5f7d8c774fc9ed67b9fbb78f2c1dc26e34e760" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.526533 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a6dcef94bc6eff3e35e95019b5f7d8c774fc9ed67b9fbb78f2c1dc26e34e760\": container with ID starting with 0a6dcef94bc6eff3e35e95019b5f7d8c774fc9ed67b9fbb78f2c1dc26e34e760 not found: ID does not exist" containerID="0a6dcef94bc6eff3e35e95019b5f7d8c774fc9ed67b9fbb78f2c1dc26e34e760" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.526566 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6dcef94bc6eff3e35e95019b5f7d8c774fc9ed67b9fbb78f2c1dc26e34e760"} err="failed to get container status \"0a6dcef94bc6eff3e35e95019b5f7d8c774fc9ed67b9fbb78f2c1dc26e34e760\": rpc error: code = NotFound desc = could not find container \"0a6dcef94bc6eff3e35e95019b5f7d8c774fc9ed67b9fbb78f2c1dc26e34e760\": container with ID starting with 0a6dcef94bc6eff3e35e95019b5f7d8c774fc9ed67b9fbb78f2c1dc26e34e760 not found: ID does not 
exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.526611 4867 scope.go:117] "RemoveContainer" containerID="3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.537305 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr5fz"] Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.541845 4867 scope.go:117] "RemoveContainer" containerID="eff0961e37310e35173a185b62a8daab00785a7eb8919489368c71e940318929" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.542302 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr5fz"] Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.560962 4867 scope.go:117] "RemoveContainer" containerID="1be2a6b57798ddf7130e0d1257322fba44bd6a3e1fdc6aff86bbaab59a2143db" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.565772 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m7q68"] Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.573263 4867 scope.go:117] "RemoveContainer" containerID="3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.573619 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145\": container with ID starting with 3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145 not found: ID does not exist" containerID="3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.573904 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145"} err="failed to get container status 
\"3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145\": rpc error: code = NotFound desc = could not find container \"3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145\": container with ID starting with 3ffe9b91f993517411ea8f0f9b4e1f15e16536c6076c98013b3944fba6c26145 not found: ID does not exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.574962 4867 scope.go:117] "RemoveContainer" containerID="eff0961e37310e35173a185b62a8daab00785a7eb8919489368c71e940318929" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.575789 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eff0961e37310e35173a185b62a8daab00785a7eb8919489368c71e940318929\": container with ID starting with eff0961e37310e35173a185b62a8daab00785a7eb8919489368c71e940318929 not found: ID does not exist" containerID="eff0961e37310e35173a185b62a8daab00785a7eb8919489368c71e940318929" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.575836 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff0961e37310e35173a185b62a8daab00785a7eb8919489368c71e940318929"} err="failed to get container status \"eff0961e37310e35173a185b62a8daab00785a7eb8919489368c71e940318929\": rpc error: code = NotFound desc = could not find container \"eff0961e37310e35173a185b62a8daab00785a7eb8919489368c71e940318929\": container with ID starting with eff0961e37310e35173a185b62a8daab00785a7eb8919489368c71e940318929 not found: ID does not exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.575868 4867 scope.go:117] "RemoveContainer" containerID="1be2a6b57798ddf7130e0d1257322fba44bd6a3e1fdc6aff86bbaab59a2143db" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.576167 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1be2a6b57798ddf7130e0d1257322fba44bd6a3e1fdc6aff86bbaab59a2143db\": container with ID starting with 1be2a6b57798ddf7130e0d1257322fba44bd6a3e1fdc6aff86bbaab59a2143db not found: ID does not exist" containerID="1be2a6b57798ddf7130e0d1257322fba44bd6a3e1fdc6aff86bbaab59a2143db" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.576199 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be2a6b57798ddf7130e0d1257322fba44bd6a3e1fdc6aff86bbaab59a2143db"} err="failed to get container status \"1be2a6b57798ddf7130e0d1257322fba44bd6a3e1fdc6aff86bbaab59a2143db\": rpc error: code = NotFound desc = could not find container \"1be2a6b57798ddf7130e0d1257322fba44bd6a3e1fdc6aff86bbaab59a2143db\": container with ID starting with 1be2a6b57798ddf7130e0d1257322fba44bd6a3e1fdc6aff86bbaab59a2143db not found: ID does not exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.576220 4867 scope.go:117] "RemoveContainer" containerID="cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.587095 4867 scope.go:117] "RemoveContainer" containerID="18436afffc12676a4187fd5b32fe6f2e5cb5111f0e4a67e955a58545e0b4fae8" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.599466 4867 scope.go:117] "RemoveContainer" containerID="48bf2a57c3aa98e0c9ff529aa158f6d78614a23e928bd18f8f6fd89626bad53a" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.626220 4867 scope.go:117] "RemoveContainer" containerID="cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.626517 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607\": container with ID starting with cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607 not found: ID does not exist" 
containerID="cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.626546 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607"} err="failed to get container status \"cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607\": rpc error: code = NotFound desc = could not find container \"cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607\": container with ID starting with cf732a527b24a22230cbc93f100464a3b0313c727f4b9e86bdad55fc8617b607 not found: ID does not exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.626566 4867 scope.go:117] "RemoveContainer" containerID="18436afffc12676a4187fd5b32fe6f2e5cb5111f0e4a67e955a58545e0b4fae8" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.626759 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18436afffc12676a4187fd5b32fe6f2e5cb5111f0e4a67e955a58545e0b4fae8\": container with ID starting with 18436afffc12676a4187fd5b32fe6f2e5cb5111f0e4a67e955a58545e0b4fae8 not found: ID does not exist" containerID="18436afffc12676a4187fd5b32fe6f2e5cb5111f0e4a67e955a58545e0b4fae8" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.626780 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18436afffc12676a4187fd5b32fe6f2e5cb5111f0e4a67e955a58545e0b4fae8"} err="failed to get container status \"18436afffc12676a4187fd5b32fe6f2e5cb5111f0e4a67e955a58545e0b4fae8\": rpc error: code = NotFound desc = could not find container \"18436afffc12676a4187fd5b32fe6f2e5cb5111f0e4a67e955a58545e0b4fae8\": container with ID starting with 18436afffc12676a4187fd5b32fe6f2e5cb5111f0e4a67e955a58545e0b4fae8 not found: ID does not exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.626793 4867 scope.go:117] 
"RemoveContainer" containerID="48bf2a57c3aa98e0c9ff529aa158f6d78614a23e928bd18f8f6fd89626bad53a" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.627012 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48bf2a57c3aa98e0c9ff529aa158f6d78614a23e928bd18f8f6fd89626bad53a\": container with ID starting with 48bf2a57c3aa98e0c9ff529aa158f6d78614a23e928bd18f8f6fd89626bad53a not found: ID does not exist" containerID="48bf2a57c3aa98e0c9ff529aa158f6d78614a23e928bd18f8f6fd89626bad53a" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.627033 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48bf2a57c3aa98e0c9ff529aa158f6d78614a23e928bd18f8f6fd89626bad53a"} err="failed to get container status \"48bf2a57c3aa98e0c9ff529aa158f6d78614a23e928bd18f8f6fd89626bad53a\": rpc error: code = NotFound desc = could not find container \"48bf2a57c3aa98e0c9ff529aa158f6d78614a23e928bd18f8f6fd89626bad53a\": container with ID starting with 48bf2a57c3aa98e0c9ff529aa158f6d78614a23e928bd18f8f6fd89626bad53a not found: ID does not exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.627044 4867 scope.go:117] "RemoveContainer" containerID="1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.640636 4867 scope.go:117] "RemoveContainer" containerID="c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.654911 4867 scope.go:117] "RemoveContainer" containerID="1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.655349 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a\": container with ID starting with 
1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a not found: ID does not exist" containerID="1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.655380 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a"} err="failed to get container status \"1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a\": rpc error: code = NotFound desc = could not find container \"1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a\": container with ID starting with 1447ad6f9262dece572323ff65442ecfbbed514c58127d49e99b67eb414e325a not found: ID does not exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.655401 4867 scope.go:117] "RemoveContainer" containerID="c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.655865 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0\": container with ID starting with c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0 not found: ID does not exist" containerID="c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.655933 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0"} err="failed to get container status \"c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0\": rpc error: code = NotFound desc = could not find container \"c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0\": container with ID starting with c6f191078c9db97f18c9b8f987c8115ad6dac694b70e0fdf253e4f0def7e6cc0 not found: ID does not 
exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.656002 4867 scope.go:117] "RemoveContainer" containerID="621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.675116 4867 scope.go:117] "RemoveContainer" containerID="bf5ed6a62eaadffa2363e6b3d39a98b154e19f18ea43ee5bd291f29218705812" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.692895 4867 scope.go:117] "RemoveContainer" containerID="8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.710765 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bt5bw"] Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.714812 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bt5bw"] Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.718127 4867 scope.go:117] "RemoveContainer" containerID="621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.718482 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb\": container with ID starting with 621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb not found: ID does not exist" containerID="621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.718541 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb"} err="failed to get container status \"621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb\": rpc error: code = NotFound desc = could not find container \"621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb\": 
container with ID starting with 621d73540beb198de762909135607f1b2982a1456e070fd5e55998f11b70facb not found: ID does not exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.718568 4867 scope.go:117] "RemoveContainer" containerID="bf5ed6a62eaadffa2363e6b3d39a98b154e19f18ea43ee5bd291f29218705812" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.719224 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf5ed6a62eaadffa2363e6b3d39a98b154e19f18ea43ee5bd291f29218705812\": container with ID starting with bf5ed6a62eaadffa2363e6b3d39a98b154e19f18ea43ee5bd291f29218705812 not found: ID does not exist" containerID="bf5ed6a62eaadffa2363e6b3d39a98b154e19f18ea43ee5bd291f29218705812" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.719256 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5ed6a62eaadffa2363e6b3d39a98b154e19f18ea43ee5bd291f29218705812"} err="failed to get container status \"bf5ed6a62eaadffa2363e6b3d39a98b154e19f18ea43ee5bd291f29218705812\": rpc error: code = NotFound desc = could not find container \"bf5ed6a62eaadffa2363e6b3d39a98b154e19f18ea43ee5bd291f29218705812\": container with ID starting with bf5ed6a62eaadffa2363e6b3d39a98b154e19f18ea43ee5bd291f29218705812 not found: ID does not exist" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.719270 4867 scope.go:117] "RemoveContainer" containerID="8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59" Jan 01 08:32:36 crc kubenswrapper[4867]: E0101 08:32:36.719518 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59\": container with ID starting with 8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59 not found: ID does not exist" 
containerID="8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59" Jan 01 08:32:36 crc kubenswrapper[4867]: I0101 08:32:36.719561 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59"} err="failed to get container status \"8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59\": rpc error: code = NotFound desc = could not find container \"8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59\": container with ID starting with 8f83968b4221e710316b20f187494a77775da996b435dc941cf0fa0982bfdc59 not found: ID does not exist" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.134870 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e74714-78ff-4351-9088-ddf6672ce8a5" path="/var/lib/kubelet/pods/15e74714-78ff-4351-9088-ddf6672ce8a5/volumes" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.135578 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72494188-2bff-4e14-8a71-041a84c049f2" path="/var/lib/kubelet/pods/72494188-2bff-4e14-8a71-041a84c049f2/volumes" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.136132 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1938e8-f894-481e-a3d9-9050583ee8c2" path="/var/lib/kubelet/pods/8b1938e8-f894-481e-a3d9-9050583ee8c2/volumes" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.137056 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" path="/var/lib/kubelet/pods/8cda336b-c663-4993-bdc1-66b729bf0740/volumes" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.137609 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" path="/var/lib/kubelet/pods/bcb25595-1b19-4e0b-a711-f3e0ed8e0689/volumes" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.401020 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" event={"ID":"74dd8dbf-6baa-483d-8228-1248a8e3b791","Type":"ContainerStarted","Data":"00c9328eed7d616837ca274e434a6a038f27aaaf5a19abdf1a373197fbee445a"} Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.401059 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" event={"ID":"74dd8dbf-6baa-483d-8228-1248a8e3b791","Type":"ContainerStarted","Data":"4d1881dee9aa5d165e501ec37ee6bef8a62d34cbfc972df82770821a81531b1a"} Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.401220 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.404223 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.415668 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-m7q68" podStartSLOduration=2.415648602 podStartE2EDuration="2.415648602s" podCreationTimestamp="2026-01-01 08:32:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:32:37.414027702 +0000 UTC m=+366.549296491" watchObservedRunningTime="2026-01-01 08:32:37.415648602 +0000 UTC m=+366.550917371" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.904358 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sf9dk"] Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.904942 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" containerName="extract-utilities" Jan 01 08:32:37 crc 
kubenswrapper[4867]: I0101 08:32:37.904957 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" containerName="extract-utilities" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.904965 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1938e8-f894-481e-a3d9-9050583ee8c2" containerName="extract-utilities" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.904971 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1938e8-f894-481e-a3d9-9050583ee8c2" containerName="extract-utilities" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.904981 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1938e8-f894-481e-a3d9-9050583ee8c2" containerName="registry-server" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.904986 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1938e8-f894-481e-a3d9-9050583ee8c2" containerName="registry-server" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.904995 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e74714-78ff-4351-9088-ddf6672ce8a5" containerName="marketplace-operator" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905000 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e74714-78ff-4351-9088-ddf6672ce8a5" containerName="marketplace-operator" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.905007 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerName="extract-utilities" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905012 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerName="extract-utilities" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.905021 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72494188-2bff-4e14-8a71-041a84c049f2" containerName="extract-utilities" Jan 01 
08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905027 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="72494188-2bff-4e14-8a71-041a84c049f2" containerName="extract-utilities" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.905059 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72494188-2bff-4e14-8a71-041a84c049f2" containerName="extract-content" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905066 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="72494188-2bff-4e14-8a71-041a84c049f2" containerName="extract-content" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.905075 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1938e8-f894-481e-a3d9-9050583ee8c2" containerName="extract-content" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905080 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1938e8-f894-481e-a3d9-9050583ee8c2" containerName="extract-content" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.905090 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" containerName="extract-content" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905095 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" containerName="extract-content" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.905105 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" containerName="registry-server" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905110 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" containerName="registry-server" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.905121 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72494188-2bff-4e14-8a71-041a84c049f2" containerName="registry-server" Jan 01 08:32:37 crc 
kubenswrapper[4867]: I0101 08:32:37.905127 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="72494188-2bff-4e14-8a71-041a84c049f2" containerName="registry-server" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.905136 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerName="registry-server" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905142 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerName="registry-server" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.905152 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerName="extract-content" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905158 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerName="extract-content" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905273 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="72494188-2bff-4e14-8a71-041a84c049f2" containerName="registry-server" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905288 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb25595-1b19-4e0b-a711-f3e0ed8e0689" containerName="registry-server" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905297 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e74714-78ff-4351-9088-ddf6672ce8a5" containerName="marketplace-operator" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905305 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e74714-78ff-4351-9088-ddf6672ce8a5" containerName="marketplace-operator" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905313 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cda336b-c663-4993-bdc1-66b729bf0740" containerName="registry-server" Jan 01 
08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905320 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1938e8-f894-481e-a3d9-9050583ee8c2" containerName="registry-server" Jan 01 08:32:37 crc kubenswrapper[4867]: E0101 08:32:37.905398 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e74714-78ff-4351-9088-ddf6672ce8a5" containerName="marketplace-operator" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905405 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e74714-78ff-4351-9088-ddf6672ce8a5" containerName="marketplace-operator" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.905982 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.908037 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 01 08:32:37 crc kubenswrapper[4867]: I0101 08:32:37.909854 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sf9dk"] Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.015624 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5rhs\" (UniqueName: \"kubernetes.io/projected/39bfdca6-5787-4eaa-bc02-41f54ae947ee-kube-api-access-h5rhs\") pod \"redhat-marketplace-sf9dk\" (UID: \"39bfdca6-5787-4eaa-bc02-41f54ae947ee\") " pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.015674 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bfdca6-5787-4eaa-bc02-41f54ae947ee-catalog-content\") pod \"redhat-marketplace-sf9dk\" (UID: \"39bfdca6-5787-4eaa-bc02-41f54ae947ee\") " pod="openshift-marketplace/redhat-marketplace-sf9dk" 
Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.015703 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bfdca6-5787-4eaa-bc02-41f54ae947ee-utilities\") pod \"redhat-marketplace-sf9dk\" (UID: \"39bfdca6-5787-4eaa-bc02-41f54ae947ee\") " pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.099391 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wdh87"] Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.100528 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.102108 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.108212 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdh87"] Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.118947 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bfdca6-5787-4eaa-bc02-41f54ae947ee-catalog-content\") pod \"redhat-marketplace-sf9dk\" (UID: \"39bfdca6-5787-4eaa-bc02-41f54ae947ee\") " pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.119821 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bfdca6-5787-4eaa-bc02-41f54ae947ee-utilities\") pod \"redhat-marketplace-sf9dk\" (UID: \"39bfdca6-5787-4eaa-bc02-41f54ae947ee\") " pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.119343 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bfdca6-5787-4eaa-bc02-41f54ae947ee-catalog-content\") pod \"redhat-marketplace-sf9dk\" (UID: \"39bfdca6-5787-4eaa-bc02-41f54ae947ee\") " pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.120153 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bfdca6-5787-4eaa-bc02-41f54ae947ee-utilities\") pod \"redhat-marketplace-sf9dk\" (UID: \"39bfdca6-5787-4eaa-bc02-41f54ae947ee\") " pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.120222 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5rhs\" (UniqueName: \"kubernetes.io/projected/39bfdca6-5787-4eaa-bc02-41f54ae947ee-kube-api-access-h5rhs\") pod \"redhat-marketplace-sf9dk\" (UID: \"39bfdca6-5787-4eaa-bc02-41f54ae947ee\") " pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.141094 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5rhs\" (UniqueName: \"kubernetes.io/projected/39bfdca6-5787-4eaa-bc02-41f54ae947ee-kube-api-access-h5rhs\") pod \"redhat-marketplace-sf9dk\" (UID: \"39bfdca6-5787-4eaa-bc02-41f54ae947ee\") " pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.221024 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8112e9-ec27-48d7-8e3a-491aaa03daa9-catalog-content\") pod \"redhat-operators-wdh87\" (UID: \"1f8112e9-ec27-48d7-8e3a-491aaa03daa9\") " pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.221092 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8112e9-ec27-48d7-8e3a-491aaa03daa9-utilities\") pod \"redhat-operators-wdh87\" (UID: \"1f8112e9-ec27-48d7-8e3a-491aaa03daa9\") " pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.221129 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7ll6\" (UniqueName: \"kubernetes.io/projected/1f8112e9-ec27-48d7-8e3a-491aaa03daa9-kube-api-access-h7ll6\") pod \"redhat-operators-wdh87\" (UID: \"1f8112e9-ec27-48d7-8e3a-491aaa03daa9\") " pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.260031 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.322971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8112e9-ec27-48d7-8e3a-491aaa03daa9-catalog-content\") pod \"redhat-operators-wdh87\" (UID: \"1f8112e9-ec27-48d7-8e3a-491aaa03daa9\") " pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.323070 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8112e9-ec27-48d7-8e3a-491aaa03daa9-utilities\") pod \"redhat-operators-wdh87\" (UID: \"1f8112e9-ec27-48d7-8e3a-491aaa03daa9\") " pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.323142 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7ll6\" (UniqueName: \"kubernetes.io/projected/1f8112e9-ec27-48d7-8e3a-491aaa03daa9-kube-api-access-h7ll6\") pod \"redhat-operators-wdh87\" (UID: 
\"1f8112e9-ec27-48d7-8e3a-491aaa03daa9\") " pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.323843 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8112e9-ec27-48d7-8e3a-491aaa03daa9-utilities\") pod \"redhat-operators-wdh87\" (UID: \"1f8112e9-ec27-48d7-8e3a-491aaa03daa9\") " pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.323847 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8112e9-ec27-48d7-8e3a-491aaa03daa9-catalog-content\") pod \"redhat-operators-wdh87\" (UID: \"1f8112e9-ec27-48d7-8e3a-491aaa03daa9\") " pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.347955 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7ll6\" (UniqueName: \"kubernetes.io/projected/1f8112e9-ec27-48d7-8e3a-491aaa03daa9-kube-api-access-h7ll6\") pod \"redhat-operators-wdh87\" (UID: \"1f8112e9-ec27-48d7-8e3a-491aaa03daa9\") " pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.423266 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.463955 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sf9dk"] Jan 01 08:32:38 crc kubenswrapper[4867]: W0101 08:32:38.472258 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39bfdca6_5787_4eaa_bc02_41f54ae947ee.slice/crio-c0518ee040752f207d34b64b8b6505b51474e1fb60e207f4b8739b43faac68a4 WatchSource:0}: Error finding container c0518ee040752f207d34b64b8b6505b51474e1fb60e207f4b8739b43faac68a4: Status 404 returned error can't find the container with id c0518ee040752f207d34b64b8b6505b51474e1fb60e207f4b8739b43faac68a4 Jan 01 08:32:38 crc kubenswrapper[4867]: I0101 08:32:38.662279 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdh87"] Jan 01 08:32:38 crc kubenswrapper[4867]: W0101 08:32:38.670148 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f8112e9_ec27_48d7_8e3a_491aaa03daa9.slice/crio-aac7418b0db73df0bc5ed42fb5cd42b02d7e0eeef95629ebc5af5ec3db0eee5d WatchSource:0}: Error finding container aac7418b0db73df0bc5ed42fb5cd42b02d7e0eeef95629ebc5af5ec3db0eee5d: Status 404 returned error can't find the container with id aac7418b0db73df0bc5ed42fb5cd42b02d7e0eeef95629ebc5af5ec3db0eee5d Jan 01 08:32:39 crc kubenswrapper[4867]: I0101 08:32:39.419352 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f8112e9-ec27-48d7-8e3a-491aaa03daa9" containerID="6577582f838d272c3a358ad3915e5667bc42e7d06a076f1452a6238dcc058962" exitCode=0 Jan 01 08:32:39 crc kubenswrapper[4867]: I0101 08:32:39.419454 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdh87" 
event={"ID":"1f8112e9-ec27-48d7-8e3a-491aaa03daa9","Type":"ContainerDied","Data":"6577582f838d272c3a358ad3915e5667bc42e7d06a076f1452a6238dcc058962"} Jan 01 08:32:39 crc kubenswrapper[4867]: I0101 08:32:39.419874 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdh87" event={"ID":"1f8112e9-ec27-48d7-8e3a-491aaa03daa9","Type":"ContainerStarted","Data":"aac7418b0db73df0bc5ed42fb5cd42b02d7e0eeef95629ebc5af5ec3db0eee5d"} Jan 01 08:32:39 crc kubenswrapper[4867]: I0101 08:32:39.424035 4867 generic.go:334] "Generic (PLEG): container finished" podID="39bfdca6-5787-4eaa-bc02-41f54ae947ee" containerID="9fc7948ed68ce8c30b157ea508160c943e4927109641f0bf91814c0cee868740" exitCode=0 Jan 01 08:32:39 crc kubenswrapper[4867]: I0101 08:32:39.424819 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sf9dk" event={"ID":"39bfdca6-5787-4eaa-bc02-41f54ae947ee","Type":"ContainerDied","Data":"9fc7948ed68ce8c30b157ea508160c943e4927109641f0bf91814c0cee868740"} Jan 01 08:32:39 crc kubenswrapper[4867]: I0101 08:32:39.424870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sf9dk" event={"ID":"39bfdca6-5787-4eaa-bc02-41f54ae947ee","Type":"ContainerStarted","Data":"c0518ee040752f207d34b64b8b6505b51474e1fb60e207f4b8739b43faac68a4"} Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.308857 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nj5s2"] Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.310784 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.314940 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.321353 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nj5s2"] Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.348385 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7514bc2-fed6-4888-ad51-5849e664cf35-utilities\") pod \"community-operators-nj5s2\" (UID: \"d7514bc2-fed6-4888-ad51-5849e664cf35\") " pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.348444 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7pnt\" (UniqueName: \"kubernetes.io/projected/d7514bc2-fed6-4888-ad51-5849e664cf35-kube-api-access-k7pnt\") pod \"community-operators-nj5s2\" (UID: \"d7514bc2-fed6-4888-ad51-5849e664cf35\") " pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.348467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7514bc2-fed6-4888-ad51-5849e664cf35-catalog-content\") pod \"community-operators-nj5s2\" (UID: \"d7514bc2-fed6-4888-ad51-5849e664cf35\") " pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.431537 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdh87" 
event={"ID":"1f8112e9-ec27-48d7-8e3a-491aaa03daa9","Type":"ContainerStarted","Data":"b884772b5e5ef15884faae6b96a76f8f889c01fff4d5c5a6053386d2062188a7"} Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.433473 4867 generic.go:334] "Generic (PLEG): container finished" podID="39bfdca6-5787-4eaa-bc02-41f54ae947ee" containerID="391b1bb2427284975457d6e24f92a3e5d38dd1499384fa4120bec31a34d1b679" exitCode=0 Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.433512 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sf9dk" event={"ID":"39bfdca6-5787-4eaa-bc02-41f54ae947ee","Type":"ContainerDied","Data":"391b1bb2427284975457d6e24f92a3e5d38dd1499384fa4120bec31a34d1b679"} Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.450320 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7514bc2-fed6-4888-ad51-5849e664cf35-utilities\") pod \"community-operators-nj5s2\" (UID: \"d7514bc2-fed6-4888-ad51-5849e664cf35\") " pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.450422 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7pnt\" (UniqueName: \"kubernetes.io/projected/d7514bc2-fed6-4888-ad51-5849e664cf35-kube-api-access-k7pnt\") pod \"community-operators-nj5s2\" (UID: \"d7514bc2-fed6-4888-ad51-5849e664cf35\") " pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.450465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7514bc2-fed6-4888-ad51-5849e664cf35-catalog-content\") pod \"community-operators-nj5s2\" (UID: \"d7514bc2-fed6-4888-ad51-5849e664cf35\") " pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.451081 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7514bc2-fed6-4888-ad51-5849e664cf35-utilities\") pod \"community-operators-nj5s2\" (UID: \"d7514bc2-fed6-4888-ad51-5849e664cf35\") " pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.451113 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7514bc2-fed6-4888-ad51-5849e664cf35-catalog-content\") pod \"community-operators-nj5s2\" (UID: \"d7514bc2-fed6-4888-ad51-5849e664cf35\") " pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.478165 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7pnt\" (UniqueName: \"kubernetes.io/projected/d7514bc2-fed6-4888-ad51-5849e664cf35-kube-api-access-k7pnt\") pod \"community-operators-nj5s2\" (UID: \"d7514bc2-fed6-4888-ad51-5849e664cf35\") " pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.498136 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dtc7k"] Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.499112 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.500803 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.522627 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dtc7k"] Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.552383 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lxrk\" (UniqueName: \"kubernetes.io/projected/727cf46e-4a29-4a4a-9c90-6052bc53068c-kube-api-access-2lxrk\") pod \"certified-operators-dtc7k\" (UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.552451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-utilities\") pod \"certified-operators-dtc7k\" (UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.552529 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-catalog-content\") pod \"certified-operators-dtc7k\" (UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.654195 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lxrk\" (UniqueName: \"kubernetes.io/projected/727cf46e-4a29-4a4a-9c90-6052bc53068c-kube-api-access-2lxrk\") pod \"certified-operators-dtc7k\" 
(UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.654260 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-utilities\") pod \"certified-operators-dtc7k\" (UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.654309 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-catalog-content\") pod \"certified-operators-dtc7k\" (UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.654849 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-catalog-content\") pod \"certified-operators-dtc7k\" (UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.654907 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-utilities\") pod \"certified-operators-dtc7k\" (UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.681762 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lxrk\" (UniqueName: \"kubernetes.io/projected/727cf46e-4a29-4a4a-9c90-6052bc53068c-kube-api-access-2lxrk\") pod \"certified-operators-dtc7k\" (UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " 
pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.743117 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.813541 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:40 crc kubenswrapper[4867]: I0101 08:32:40.950404 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nj5s2"] Jan 01 08:32:40 crc kubenswrapper[4867]: W0101 08:32:40.963059 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7514bc2_fed6_4888_ad51_5849e664cf35.slice/crio-49b1394c040a55ca02b38e063f973a8e55f73d6b94f8a450ce417ea3bf9e30e7 WatchSource:0}: Error finding container 49b1394c040a55ca02b38e063f973a8e55f73d6b94f8a450ce417ea3bf9e30e7: Status 404 returned error can't find the container with id 49b1394c040a55ca02b38e063f973a8e55f73d6b94f8a450ce417ea3bf9e30e7 Jan 01 08:32:41 crc kubenswrapper[4867]: I0101 08:32:41.206682 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dtc7k"] Jan 01 08:32:41 crc kubenswrapper[4867]: W0101 08:32:41.214475 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727cf46e_4a29_4a4a_9c90_6052bc53068c.slice/crio-dd54f5e1466b5a876252b660d7ed1e9f9a6b09db4b21d352e8414f42f2307adf WatchSource:0}: Error finding container dd54f5e1466b5a876252b660d7ed1e9f9a6b09db4b21d352e8414f42f2307adf: Status 404 returned error can't find the container with id dd54f5e1466b5a876252b660d7ed1e9f9a6b09db4b21d352e8414f42f2307adf Jan 01 08:32:41 crc kubenswrapper[4867]: I0101 08:32:41.440109 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="727cf46e-4a29-4a4a-9c90-6052bc53068c" containerID="255c90736e993e44457c67619f931b858e44c0c9c8d0fca3c6895087785dd67d" exitCode=0 Jan 01 08:32:41 crc kubenswrapper[4867]: I0101 08:32:41.440149 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtc7k" event={"ID":"727cf46e-4a29-4a4a-9c90-6052bc53068c","Type":"ContainerDied","Data":"255c90736e993e44457c67619f931b858e44c0c9c8d0fca3c6895087785dd67d"} Jan 01 08:32:41 crc kubenswrapper[4867]: I0101 08:32:41.440456 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtc7k" event={"ID":"727cf46e-4a29-4a4a-9c90-6052bc53068c","Type":"ContainerStarted","Data":"dd54f5e1466b5a876252b660d7ed1e9f9a6b09db4b21d352e8414f42f2307adf"} Jan 01 08:32:41 crc kubenswrapper[4867]: I0101 08:32:41.443545 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f8112e9-ec27-48d7-8e3a-491aaa03daa9" containerID="b884772b5e5ef15884faae6b96a76f8f889c01fff4d5c5a6053386d2062188a7" exitCode=0 Jan 01 08:32:41 crc kubenswrapper[4867]: I0101 08:32:41.443597 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdh87" event={"ID":"1f8112e9-ec27-48d7-8e3a-491aaa03daa9","Type":"ContainerDied","Data":"b884772b5e5ef15884faae6b96a76f8f889c01fff4d5c5a6053386d2062188a7"} Jan 01 08:32:41 crc kubenswrapper[4867]: I0101 08:32:41.446527 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sf9dk" event={"ID":"39bfdca6-5787-4eaa-bc02-41f54ae947ee","Type":"ContainerStarted","Data":"d4b3057072b274746f858ecd2bd144607d0c49981013dbc1e5c10d6f26101f56"} Jan 01 08:32:41 crc kubenswrapper[4867]: I0101 08:32:41.449387 4867 generic.go:334] "Generic (PLEG): container finished" podID="d7514bc2-fed6-4888-ad51-5849e664cf35" containerID="f53627acf22a2e3aa445b8010a11d54b4409a86d8bba2827d2e81b00c4f804e0" exitCode=0 Jan 01 08:32:41 crc kubenswrapper[4867]: I0101 08:32:41.449420 
4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nj5s2" event={"ID":"d7514bc2-fed6-4888-ad51-5849e664cf35","Type":"ContainerDied","Data":"f53627acf22a2e3aa445b8010a11d54b4409a86d8bba2827d2e81b00c4f804e0"} Jan 01 08:32:41 crc kubenswrapper[4867]: I0101 08:32:41.449443 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nj5s2" event={"ID":"d7514bc2-fed6-4888-ad51-5849e664cf35","Type":"ContainerStarted","Data":"49b1394c040a55ca02b38e063f973a8e55f73d6b94f8a450ce417ea3bf9e30e7"} Jan 01 08:32:41 crc kubenswrapper[4867]: I0101 08:32:41.510259 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sf9dk" podStartSLOduration=3.029865282 podStartE2EDuration="4.51024374s" podCreationTimestamp="2026-01-01 08:32:37 +0000 UTC" firstStartedPulling="2026-01-01 08:32:39.425498215 +0000 UTC m=+368.560767024" lastFinishedPulling="2026-01-01 08:32:40.905876713 +0000 UTC m=+370.041145482" observedRunningTime="2026-01-01 08:32:41.50798164 +0000 UTC m=+370.643250429" watchObservedRunningTime="2026-01-01 08:32:41.51024374 +0000 UTC m=+370.645512519" Jan 01 08:32:42 crc kubenswrapper[4867]: I0101 08:32:42.455339 4867 generic.go:334] "Generic (PLEG): container finished" podID="d7514bc2-fed6-4888-ad51-5849e664cf35" containerID="1d8ca7c7118d3c17956f51ef97d72cbd4a639cae659da844c7d607e07149244c" exitCode=0 Jan 01 08:32:42 crc kubenswrapper[4867]: I0101 08:32:42.455547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nj5s2" event={"ID":"d7514bc2-fed6-4888-ad51-5849e664cf35","Type":"ContainerDied","Data":"1d8ca7c7118d3c17956f51ef97d72cbd4a639cae659da844c7d607e07149244c"} Jan 01 08:32:42 crc kubenswrapper[4867]: I0101 08:32:42.459010 4867 generic.go:334] "Generic (PLEG): container finished" podID="727cf46e-4a29-4a4a-9c90-6052bc53068c" 
containerID="b9994592459a1137cd454939a0f8244d95ab83c16dd5268b2c21ccc13d184892" exitCode=0 Jan 01 08:32:42 crc kubenswrapper[4867]: I0101 08:32:42.459073 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtc7k" event={"ID":"727cf46e-4a29-4a4a-9c90-6052bc53068c","Type":"ContainerDied","Data":"b9994592459a1137cd454939a0f8244d95ab83c16dd5268b2c21ccc13d184892"} Jan 01 08:32:42 crc kubenswrapper[4867]: I0101 08:32:42.464867 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdh87" event={"ID":"1f8112e9-ec27-48d7-8e3a-491aaa03daa9","Type":"ContainerStarted","Data":"bfa1a86fb324e3e54ecf8947bf33ef62ee9d37ed36466814696eaaf3817f41f3"} Jan 01 08:32:42 crc kubenswrapper[4867]: I0101 08:32:42.495305 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wdh87" podStartSLOduration=2.010031773 podStartE2EDuration="4.495290672s" podCreationTimestamp="2026-01-01 08:32:38 +0000 UTC" firstStartedPulling="2026-01-01 08:32:39.421010917 +0000 UTC m=+368.556279726" lastFinishedPulling="2026-01-01 08:32:41.906269846 +0000 UTC m=+371.041538625" observedRunningTime="2026-01-01 08:32:42.492237449 +0000 UTC m=+371.627506218" watchObservedRunningTime="2026-01-01 08:32:42.495290672 +0000 UTC m=+371.630559441" Jan 01 08:32:43 crc kubenswrapper[4867]: I0101 08:32:43.471536 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nj5s2" event={"ID":"d7514bc2-fed6-4888-ad51-5849e664cf35","Type":"ContainerStarted","Data":"3c82739c0c4c3487a13adca7f8402e71625971c5a6fd9006a95fa6865100f7bd"} Jan 01 08:32:43 crc kubenswrapper[4867]: I0101 08:32:43.473533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtc7k" 
event={"ID":"727cf46e-4a29-4a4a-9c90-6052bc53068c","Type":"ContainerStarted","Data":"05d75b2897a341874112a6c7d878c3a812b8a981fdeddda6db7756f674bdc982"} Jan 01 08:32:43 crc kubenswrapper[4867]: I0101 08:32:43.490708 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nj5s2" podStartSLOduration=2.101926537 podStartE2EDuration="3.490685311s" podCreationTimestamp="2026-01-01 08:32:40 +0000 UTC" firstStartedPulling="2026-01-01 08:32:41.451342276 +0000 UTC m=+370.586611045" lastFinishedPulling="2026-01-01 08:32:42.84010106 +0000 UTC m=+371.975369819" observedRunningTime="2026-01-01 08:32:43.487369 +0000 UTC m=+372.622637769" watchObservedRunningTime="2026-01-01 08:32:43.490685311 +0000 UTC m=+372.625954080" Jan 01 08:32:43 crc kubenswrapper[4867]: I0101 08:32:43.507572 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dtc7k" podStartSLOduration=1.9938242769999999 podStartE2EDuration="3.507550488s" podCreationTimestamp="2026-01-01 08:32:40 +0000 UTC" firstStartedPulling="2026-01-01 08:32:41.441910757 +0000 UTC m=+370.577179526" lastFinishedPulling="2026-01-01 08:32:42.955636968 +0000 UTC m=+372.090905737" observedRunningTime="2026-01-01 08:32:43.507064873 +0000 UTC m=+372.642333642" watchObservedRunningTime="2026-01-01 08:32:43.507550488 +0000 UTC m=+372.642819257" Jan 01 08:32:48 crc kubenswrapper[4867]: I0101 08:32:48.260409 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:48 crc kubenswrapper[4867]: I0101 08:32:48.261368 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:48 crc kubenswrapper[4867]: I0101 08:32:48.338004 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:48 
crc kubenswrapper[4867]: I0101 08:32:48.424395 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:48 crc kubenswrapper[4867]: I0101 08:32:48.424458 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:48 crc kubenswrapper[4867]: I0101 08:32:48.474318 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:48 crc kubenswrapper[4867]: I0101 08:32:48.538909 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wdh87" Jan 01 08:32:48 crc kubenswrapper[4867]: I0101 08:32:48.552808 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sf9dk" Jan 01 08:32:50 crc kubenswrapper[4867]: I0101 08:32:50.744195 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:50 crc kubenswrapper[4867]: I0101 08:32:50.744251 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:50 crc kubenswrapper[4867]: I0101 08:32:50.799472 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:50 crc kubenswrapper[4867]: I0101 08:32:50.814748 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:50 crc kubenswrapper[4867]: I0101 08:32:50.814799 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:50 crc kubenswrapper[4867]: I0101 08:32:50.851403 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:51 crc kubenswrapper[4867]: I0101 08:32:51.331350 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:32:51 crc kubenswrapper[4867]: I0101 08:32:51.331403 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:32:51 crc kubenswrapper[4867]: I0101 08:32:51.331443 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:32:51 crc kubenswrapper[4867]: I0101 08:32:51.331877 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c53a76cc86937cf15114c3707751f587066a2ca805617f3c3a8c296d350279a5"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 08:32:51 crc kubenswrapper[4867]: I0101 08:32:51.331981 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://c53a76cc86937cf15114c3707751f587066a2ca805617f3c3a8c296d350279a5" gracePeriod=600 Jan 01 08:32:51 crc kubenswrapper[4867]: I0101 08:32:51.555405 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 08:32:51 crc kubenswrapper[4867]: I0101 08:32:51.569446 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nj5s2" Jan 01 08:32:54 crc kubenswrapper[4867]: I0101 08:32:54.530583 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="c53a76cc86937cf15114c3707751f587066a2ca805617f3c3a8c296d350279a5" exitCode=0 Jan 01 08:32:54 crc kubenswrapper[4867]: I0101 08:32:54.530681 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"c53a76cc86937cf15114c3707751f587066a2ca805617f3c3a8c296d350279a5"} Jan 01 08:32:54 crc kubenswrapper[4867]: I0101 08:32:54.530864 4867 scope.go:117] "RemoveContainer" containerID="1c915c8585b3da45d458283e966e2c48d322ef3e55c13c69356d94b0355141df" Jan 01 08:32:55 crc kubenswrapper[4867]: I0101 08:32:55.538278 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"c5738a332d05a46a3e480cd27871b24fa7ab3c38831377e98113a1cb4db4d6b9"} Jan 01 08:33:31 crc kubenswrapper[4867]: I0101 08:33:31.325519 4867 scope.go:117] "RemoveContainer" containerID="84f3aba96b7a17026a129fe34c73f127f9db06f4b63d19cf5eebec11d9f00fe0" Jan 01 08:33:31 crc kubenswrapper[4867]: I0101 08:33:31.352601 4867 scope.go:117] "RemoveContainer" containerID="cc9b751374b1cd0251f7aff8ec686eec62920fca41f2668cc12df969629004e0" Jan 01 08:33:31 crc kubenswrapper[4867]: I0101 08:33:31.375250 4867 scope.go:117] "RemoveContainer" containerID="4f43ecb2e6ff74fd23c57cd58b8d869283bd5dbf658c5e986f25b2545da79f66" Jan 01 08:33:31 crc kubenswrapper[4867]: I0101 08:33:31.401478 4867 scope.go:117] "RemoveContainer" 
containerID="1f5226faafaf110160704628b357f0a8f29b16cd46ce271d294b0022214a1aac" Jan 01 08:33:31 crc kubenswrapper[4867]: I0101 08:33:31.435666 4867 scope.go:117] "RemoveContainer" containerID="d8ccd076c0f55bf5ec294860b0d52ab83b669b62402ceb3cb243c1188bea524a" Jan 01 08:33:31 crc kubenswrapper[4867]: I0101 08:33:31.459180 4867 scope.go:117] "RemoveContainer" containerID="5580e6b63c34c51923cb32b3e2d937d97c09ca14f97351d362deafee279da967" Jan 01 08:35:21 crc kubenswrapper[4867]: I0101 08:35:21.331684 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:35:21 crc kubenswrapper[4867]: I0101 08:35:21.332443 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:35:51 crc kubenswrapper[4867]: I0101 08:35:51.331225 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:35:51 crc kubenswrapper[4867]: I0101 08:35:51.331917 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:36:21 crc kubenswrapper[4867]: I0101 08:36:21.331121 4867 
patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:36:21 crc kubenswrapper[4867]: I0101 08:36:21.331763 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:36:21 crc kubenswrapper[4867]: I0101 08:36:21.331829 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:36:21 crc kubenswrapper[4867]: I0101 08:36:21.332563 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5738a332d05a46a3e480cd27871b24fa7ab3c38831377e98113a1cb4db4d6b9"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 08:36:21 crc kubenswrapper[4867]: I0101 08:36:21.332659 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://c5738a332d05a46a3e480cd27871b24fa7ab3c38831377e98113a1cb4db4d6b9" gracePeriod=600 Jan 01 08:36:22 crc kubenswrapper[4867]: I0101 08:36:22.215926 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="c5738a332d05a46a3e480cd27871b24fa7ab3c38831377e98113a1cb4db4d6b9" exitCode=0 Jan 01 08:36:22 crc 
kubenswrapper[4867]: I0101 08:36:22.215997 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"c5738a332d05a46a3e480cd27871b24fa7ab3c38831377e98113a1cb4db4d6b9"} Jan 01 08:36:22 crc kubenswrapper[4867]: I0101 08:36:22.216330 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"b489a809c46fea0670e0a497e09fb93b663297b7dde42c0a30153339b2adc104"} Jan 01 08:36:22 crc kubenswrapper[4867]: I0101 08:36:22.216360 4867 scope.go:117] "RemoveContainer" containerID="c53a76cc86937cf15114c3707751f587066a2ca805617f3c3a8c296d350279a5" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.079628 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r4qhz"] Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.081477 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.102721 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r4qhz"] Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.217446 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40c23d5a-758b-4c23-842f-9d25c59ad45d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.217618 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.217689 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-692p2\" (UniqueName: \"kubernetes.io/projected/40c23d5a-758b-4c23-842f-9d25c59ad45d-kube-api-access-692p2\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.217733 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40c23d5a-758b-4c23-842f-9d25c59ad45d-registry-tls\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.217756 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40c23d5a-758b-4c23-842f-9d25c59ad45d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.217852 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40c23d5a-758b-4c23-842f-9d25c59ad45d-trusted-ca\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.217896 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40c23d5a-758b-4c23-842f-9d25c59ad45d-registry-certificates\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.217953 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40c23d5a-758b-4c23-842f-9d25c59ad45d-bound-sa-token\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.247718 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.319395 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-692p2\" (UniqueName: \"kubernetes.io/projected/40c23d5a-758b-4c23-842f-9d25c59ad45d-kube-api-access-692p2\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.319913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40c23d5a-758b-4c23-842f-9d25c59ad45d-registry-tls\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.319940 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40c23d5a-758b-4c23-842f-9d25c59ad45d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.319963 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40c23d5a-758b-4c23-842f-9d25c59ad45d-trusted-ca\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.319982 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40c23d5a-758b-4c23-842f-9d25c59ad45d-registry-certificates\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.320006 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40c23d5a-758b-4c23-842f-9d25c59ad45d-bound-sa-token\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.320039 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40c23d5a-758b-4c23-842f-9d25c59ad45d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.321239 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40c23d5a-758b-4c23-842f-9d25c59ad45d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.322477 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40c23d5a-758b-4c23-842f-9d25c59ad45d-trusted-ca\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc 
kubenswrapper[4867]: I0101 08:37:21.322737 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40c23d5a-758b-4c23-842f-9d25c59ad45d-registry-certificates\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.329006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40c23d5a-758b-4c23-842f-9d25c59ad45d-registry-tls\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.336642 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40c23d5a-758b-4c23-842f-9d25c59ad45d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.343935 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-692p2\" (UniqueName: \"kubernetes.io/projected/40c23d5a-758b-4c23-842f-9d25c59ad45d-kube-api-access-692p2\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.345266 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40c23d5a-758b-4c23-842f-9d25c59ad45d-bound-sa-token\") pod \"image-registry-66df7c8f76-r4qhz\" (UID: \"40c23d5a-758b-4c23-842f-9d25c59ad45d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.400315 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:21 crc kubenswrapper[4867]: I0101 08:37:21.625512 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r4qhz"] Jan 01 08:37:22 crc kubenswrapper[4867]: I0101 08:37:22.626642 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" event={"ID":"40c23d5a-758b-4c23-842f-9d25c59ad45d","Type":"ContainerStarted","Data":"54f18d067161e532b8b8a0c660d7ab8acccb265a2f7a9df1f54a83238011214e"} Jan 01 08:37:22 crc kubenswrapper[4867]: I0101 08:37:22.627063 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" event={"ID":"40c23d5a-758b-4c23-842f-9d25c59ad45d","Type":"ContainerStarted","Data":"8c530fc23971291a6d142e6772da202445f171ad0a956424d441ba23d6090e3f"} Jan 01 08:37:22 crc kubenswrapper[4867]: I0101 08:37:22.627104 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:41 crc kubenswrapper[4867]: I0101 08:37:41.411301 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" Jan 01 08:37:41 crc kubenswrapper[4867]: I0101 08:37:41.446178 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-r4qhz" podStartSLOduration=20.446149276 podStartE2EDuration="20.446149276s" podCreationTimestamp="2026-01-01 08:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:37:22.660002025 +0000 UTC 
m=+651.795270854" watchObservedRunningTime="2026-01-01 08:37:41.446149276 +0000 UTC m=+670.581418075" Jan 01 08:37:41 crc kubenswrapper[4867]: I0101 08:37:41.491369 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nb85"] Jan 01 08:38:06 crc kubenswrapper[4867]: I0101 08:38:06.542342 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" podUID="bab255b3-6b41-494f-a4f6-dde5ebe7b538" containerName="registry" containerID="cri-o://8d47e170aa6bcb204c2758b521d56a700de61b0e3faf84929813cba1ff65f620" gracePeriod=30 Jan 01 08:38:06 crc kubenswrapper[4867]: I0101 08:38:06.966967 4867 generic.go:334] "Generic (PLEG): container finished" podID="bab255b3-6b41-494f-a4f6-dde5ebe7b538" containerID="8d47e170aa6bcb204c2758b521d56a700de61b0e3faf84929813cba1ff65f620" exitCode=0 Jan 01 08:38:06 crc kubenswrapper[4867]: I0101 08:38:06.967013 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" event={"ID":"bab255b3-6b41-494f-a4f6-dde5ebe7b538","Type":"ContainerDied","Data":"8d47e170aa6bcb204c2758b521d56a700de61b0e3faf84929813cba1ff65f620"} Jan 01 08:38:06 crc kubenswrapper[4867]: I0101 08:38:06.967045 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" event={"ID":"bab255b3-6b41-494f-a4f6-dde5ebe7b538","Type":"ContainerDied","Data":"ce8043c911f4b287e7d74de50848297bc19a3f2745188805cf6ca2c858c741c4"} Jan 01 08:38:06 crc kubenswrapper[4867]: I0101 08:38:06.967060 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce8043c911f4b287e7d74de50848297bc19a3f2745188805cf6ca2c858c741c4" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.004304 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.202675 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kv7b\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-kube-api-access-8kv7b\") pod \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.203048 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-trusted-ca\") pod \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.203090 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-certificates\") pod \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.203129 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-tls\") pod \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.203162 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bab255b3-6b41-494f-a4f6-dde5ebe7b538-ca-trust-extracted\") pod \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.203197 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bab255b3-6b41-494f-a4f6-dde5ebe7b538-installation-pull-secrets\") pod \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.203376 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.203421 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-bound-sa-token\") pod \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\" (UID: \"bab255b3-6b41-494f-a4f6-dde5ebe7b538\") " Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.204258 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bab255b3-6b41-494f-a4f6-dde5ebe7b538" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.206350 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bab255b3-6b41-494f-a4f6-dde5ebe7b538" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.212834 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bab255b3-6b41-494f-a4f6-dde5ebe7b538" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.215189 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-kube-api-access-8kv7b" (OuterVolumeSpecName: "kube-api-access-8kv7b") pod "bab255b3-6b41-494f-a4f6-dde5ebe7b538" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538"). InnerVolumeSpecName "kube-api-access-8kv7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.215980 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bab255b3-6b41-494f-a4f6-dde5ebe7b538" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.219103 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab255b3-6b41-494f-a4f6-dde5ebe7b538-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bab255b3-6b41-494f-a4f6-dde5ebe7b538" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.225618 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bab255b3-6b41-494f-a4f6-dde5ebe7b538" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.243275 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab255b3-6b41-494f-a4f6-dde5ebe7b538-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bab255b3-6b41-494f-a4f6-dde5ebe7b538" (UID: "bab255b3-6b41-494f-a4f6-dde5ebe7b538"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.305092 4867 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.305432 4867 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bab255b3-6b41-494f-a4f6-dde5ebe7b538-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.305462 4867 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bab255b3-6b41-494f-a4f6-dde5ebe7b538-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.305487 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.305510 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kv7b\" (UniqueName: \"kubernetes.io/projected/bab255b3-6b41-494f-a4f6-dde5ebe7b538-kube-api-access-8kv7b\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.305530 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.305551 4867 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bab255b3-6b41-494f-a4f6-dde5ebe7b538-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:07 crc kubenswrapper[4867]: I0101 08:38:07.974660 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nb85" Jan 01 08:38:08 crc kubenswrapper[4867]: I0101 08:38:08.027985 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nb85"] Jan 01 08:38:08 crc kubenswrapper[4867]: I0101 08:38:08.035230 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nb85"] Jan 01 08:38:09 crc kubenswrapper[4867]: I0101 08:38:09.141922 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab255b3-6b41-494f-a4f6-dde5ebe7b538" path="/var/lib/kubelet/pods/bab255b3-6b41-494f-a4f6-dde5ebe7b538/volumes" Jan 01 08:38:21 crc kubenswrapper[4867]: I0101 08:38:21.331022 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:38:21 crc kubenswrapper[4867]: I0101 08:38:21.331798 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:38:31 crc kubenswrapper[4867]: I0101 08:38:31.629052 4867 scope.go:117] "RemoveContainer" containerID="8d47e170aa6bcb204c2758b521d56a700de61b0e3faf84929813cba1ff65f620" Jan 01 08:38:51 crc kubenswrapper[4867]: I0101 08:38:51.331363 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:38:51 
crc kubenswrapper[4867]: I0101 08:38:51.332142 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.311372 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nftn"] Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.312919 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovn-controller" containerID="cri-o://a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2" gracePeriod=30 Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.313029 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="sbdb" containerID="cri-o://7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d" gracePeriod=30 Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.313080 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a" gracePeriod=30 Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.313157 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovn-acl-logging" 
containerID="cri-o://21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936" gracePeriod=30 Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.313146 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="northd" containerID="cri-o://88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba" gracePeriod=30 Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.313229 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="kube-rbac-proxy-node" containerID="cri-o://29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a" gracePeriod=30 Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.313435 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="nbdb" containerID="cri-o://4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20" gracePeriod=30 Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.349031 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" containerID="cri-o://5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7" gracePeriod=30 Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.647779 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/3.log" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.650282 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovn-acl-logging/0.log" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.650830 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovn-controller/0.log" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.651272 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699088 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g8smn"] Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699490 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="kubecfg-setup" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699501 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="kubecfg-setup" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699509 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699514 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699524 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="northd" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699530 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="northd" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699539 4867 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="kube-rbac-proxy-node" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699545 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="kube-rbac-proxy-node" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699555 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699562 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699572 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699577 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699585 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovn-acl-logging" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699591 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovn-acl-logging" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699598 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699604 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699610 4867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovn-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699616 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovn-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699626 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="nbdb" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699660 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="nbdb" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699669 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab255b3-6b41-494f-a4f6-dde5ebe7b538" containerName="registry" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699675 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab255b3-6b41-494f-a4f6-dde5ebe7b538" containerName="registry" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699683 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="kube-rbac-proxy-ovn-metrics" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699689 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="kube-rbac-proxy-ovn-metrics" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699695 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="sbdb" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699701 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="sbdb" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699774 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699783 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="kube-rbac-proxy-node" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699789 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699796 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699803 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="northd" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699810 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab255b3-6b41-494f-a4f6-dde5ebe7b538" containerName="registry" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699818 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="sbdb" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699827 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovn-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699833 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="nbdb" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699840 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="kube-rbac-proxy-ovn-metrics" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699848 4867 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovn-acl-logging" Jan 01 08:38:59 crc kubenswrapper[4867]: E0101 08:38:59.699957 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.699964 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.700039 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.700048 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerName="ovnkube-controller" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.701511 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714162 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-var-lib-openvswitch\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714212 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-etc-openvswitch\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714266 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-log-socket\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714295 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-ovn-kubernetes\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714342 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-systemd-units\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714400 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-netd\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714457 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-kubelet\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714491 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovn-node-metrics-cert\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714531 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-systemd\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714554 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-netns\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714577 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-ovn\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc 
kubenswrapper[4867]: I0101 08:38:59.714603 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-slash\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714625 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-node-log\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714649 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvswz\" (UniqueName: \"kubernetes.io/projected/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-kube-api-access-kvswz\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714671 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-bin\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714697 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-config\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-script-lib\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714740 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-env-overrides\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714768 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.714788 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-openvswitch\") pod \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\" (UID: \"2d26a65b-86d6-4603-bdeb-ffcb2f086fda\") " Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715033 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715069 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715177 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715230 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715256 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-log-socket" (OuterVolumeSpecName: "log-socket") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715279 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715308 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715335 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715361 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715389 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715418 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715441 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715474 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715496 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-slash" (OuterVolumeSpecName: "host-slash") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715774 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715857 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-node-log" (OuterVolumeSpecName: "node-log") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.715961 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.720987 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.721174 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-kube-api-access-kvswz" (OuterVolumeSpecName: "kube-api-access-kvswz") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "kube-api-access-kvswz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.729240 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2d26a65b-86d6-4603-bdeb-ffcb2f086fda" (UID: "2d26a65b-86d6-4603-bdeb-ffcb2f086fda"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.815835 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-slash\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.815876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-cni-bin\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.815910 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-run-ovn\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.815949 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-run-openvswitch\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.815980 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-log-socket\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.815998 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-cni-netd\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816012 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-kubelet\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816033 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-etc-openvswitch\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816052 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8571df12-51be-414d-b0b5-9f0e024851be-env-overrides\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-run-netns\") pod \"ovnkube-node-g8smn\" (UID: 
\"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816085 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-run-ovn-kubernetes\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816105 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-run-systemd\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816125 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8571df12-51be-414d-b0b5-9f0e024851be-ovnkube-config\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816147 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-var-lib-openvswitch\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816162 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/8571df12-51be-414d-b0b5-9f0e024851be-ovnkube-script-lib\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816183 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816330 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-systemd-units\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816446 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8571df12-51be-414d-b0b5-9f0e024851be-ovn-node-metrics-cert\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816478 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lztm\" (UniqueName: \"kubernetes.io/projected/8571df12-51be-414d-b0b5-9f0e024851be-kube-api-access-2lztm\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816541 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-node-log\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816729 4867 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-log-socket\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816759 4867 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816777 4867 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816793 4867 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816808 4867 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816822 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816833 4867 
reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816844 4867 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816855 4867 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816864 4867 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-slash\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816874 4867 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-node-log\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816903 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvswz\" (UniqueName: \"kubernetes.io/projected/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-kube-api-access-kvswz\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816915 4867 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816925 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816935 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816944 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816954 4867 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816966 4867 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.816987 4867 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.817003 4867 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d26a65b-86d6-4603-bdeb-ffcb2f086fda-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.917716 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-slash\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.917788 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-cni-bin\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.917825 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-run-ovn\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.917863 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-run-openvswitch\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.917916 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-run-ovn\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.917941 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-log-socket\") pod \"ovnkube-node-g8smn\" (UID: 
\"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.917981 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-run-openvswitch\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918007 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-cni-netd\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918026 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-cni-bin\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.917869 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-slash\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.917992 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-log-socket\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: 
I0101 08:38:59.917990 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-cni-netd\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918116 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-kubelet\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918160 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-etc-openvswitch\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918198 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8571df12-51be-414d-b0b5-9f0e024851be-env-overrides\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918204 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-kubelet\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918239 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-run-netns\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918270 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-run-ovn-kubernetes\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918302 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-run-systemd\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918319 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8571df12-51be-414d-b0b5-9f0e024851be-ovnkube-config\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918362 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-var-lib-openvswitch\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918378 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/8571df12-51be-414d-b0b5-9f0e024851be-ovnkube-script-lib\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918426 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918449 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-systemd-units\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918475 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8571df12-51be-414d-b0b5-9f0e024851be-ovn-node-metrics-cert\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918493 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lztm\" (UniqueName: \"kubernetes.io/projected/8571df12-51be-414d-b0b5-9f0e024851be-kube-api-access-2lztm\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918514 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-node-log\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918598 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-node-log\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-run-netns\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918640 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-run-ovn-kubernetes\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.918660 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-run-systemd\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.919207 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8571df12-51be-414d-b0b5-9f0e024851be-ovnkube-config\") pod \"ovnkube-node-g8smn\" (UID: 
\"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.919242 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-var-lib-openvswitch\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.919275 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8571df12-51be-414d-b0b5-9f0e024851be-env-overrides\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.919348 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-etc-openvswitch\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.919398 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-systemd-units\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.919442 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8571df12-51be-414d-b0b5-9f0e024851be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.919627 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8571df12-51be-414d-b0b5-9f0e024851be-ovnkube-script-lib\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.922781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8571df12-51be-414d-b0b5-9f0e024851be-ovn-node-metrics-cert\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:38:59 crc kubenswrapper[4867]: I0101 08:38:59.951016 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lztm\" (UniqueName: \"kubernetes.io/projected/8571df12-51be-414d-b0b5-9f0e024851be-kube-api-access-2lztm\") pod \"ovnkube-node-g8smn\" (UID: \"8571df12-51be-414d-b0b5-9f0e024851be\") " pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.018876 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.342266 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovnkube-controller/3.log" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.344995 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovn-acl-logging/0.log" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345534 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nftn_2d26a65b-86d6-4603-bdeb-ffcb2f086fda/ovn-controller/0.log" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345879 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7" exitCode=0 Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345922 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d" exitCode=0 Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345918 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345934 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20" exitCode=0 Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345943 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba" exitCode=0 Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345953 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a" exitCode=0 Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345962 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a" exitCode=0 Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345969 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345974 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346015 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346032 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346043 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346057 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345971 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936" exitCode=143 Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346081 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" containerID="a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2" exitCode=143 Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346096 4867 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346111 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346119 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346127 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346135 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346142 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346175 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346182 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346187 4867 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346198 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346211 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346219 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346225 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346253 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346262 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346268 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346274 4867 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346281 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346287 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346293 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346302 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346335 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346344 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346350 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d"} Jan 01 
08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346356 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346362 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346368 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346375 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346382 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346388 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346428 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346440 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nftn" 
event={"ID":"2d26a65b-86d6-4603-bdeb-ffcb2f086fda","Type":"ContainerDied","Data":"b757c86dbe8954ffcc745fd87d69a6c3786db50f80bc1098bfe5f093f59e51c3"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.345986 4867 scope.go:117] "RemoveContainer" containerID="5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346455 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346587 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346623 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346639 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346655 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346669 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346684 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346699 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346713 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.346733 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.349584 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wkbs8_da72a722-a2a3-459e-875a-e1605b442e05/kube-multus/2.log" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.352673 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wkbs8_da72a722-a2a3-459e-875a-e1605b442e05/kube-multus/1.log" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.352737 4867 generic.go:334] "Generic (PLEG): container finished" podID="da72a722-a2a3-459e-875a-e1605b442e05" containerID="6f96374fd054c235b06dbd37e3fde553db1ef9928046058431a727ac1da2bf50" exitCode=2 Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.352814 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wkbs8" event={"ID":"da72a722-a2a3-459e-875a-e1605b442e05","Type":"ContainerDied","Data":"6f96374fd054c235b06dbd37e3fde553db1ef9928046058431a727ac1da2bf50"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.352847 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.354807 4867 scope.go:117] "RemoveContainer" containerID="6f96374fd054c235b06dbd37e3fde553db1ef9928046058431a727ac1da2bf50" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.355089 4867 generic.go:334] "Generic (PLEG): container finished" podID="8571df12-51be-414d-b0b5-9f0e024851be" containerID="b97fdfaf3f0f9910947c0f96bfeb82ccd924f88f80bc5d072dc91ff8d4648bd5" exitCode=0 Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.355116 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" event={"ID":"8571df12-51be-414d-b0b5-9f0e024851be","Type":"ContainerDied","Data":"b97fdfaf3f0f9910947c0f96bfeb82ccd924f88f80bc5d072dc91ff8d4648bd5"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.355139 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" event={"ID":"8571df12-51be-414d-b0b5-9f0e024851be","Type":"ContainerStarted","Data":"cabb2f17f075159529598e795462b1ce5fe93f4ef9fdb27ab945b9ea0f136a56"} Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.383716 4867 scope.go:117] "RemoveContainer" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.413551 4867 scope.go:117] "RemoveContainer" containerID="7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.429973 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nftn"] Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.434654 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nftn"] Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.437217 4867 scope.go:117] "RemoveContainer" 
containerID="4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.468291 4867 scope.go:117] "RemoveContainer" containerID="88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.485246 4867 scope.go:117] "RemoveContainer" containerID="a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.497592 4867 scope.go:117] "RemoveContainer" containerID="29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.512633 4867 scope.go:117] "RemoveContainer" containerID="21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.530110 4867 scope.go:117] "RemoveContainer" containerID="a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.545330 4867 scope.go:117] "RemoveContainer" containerID="19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.563558 4867 scope.go:117] "RemoveContainer" containerID="5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7" Jan 01 08:39:00 crc kubenswrapper[4867]: E0101 08:39:00.564292 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7\": container with ID starting with 5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7 not found: ID does not exist" containerID="5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.564343 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7"} err="failed to get container status \"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7\": rpc error: code = NotFound desc = could not find container \"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7\": container with ID starting with 5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.564380 4867 scope.go:117] "RemoveContainer" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" Jan 01 08:39:00 crc kubenswrapper[4867]: E0101 08:39:00.565020 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\": container with ID starting with f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b not found: ID does not exist" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.565063 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b"} err="failed to get container status \"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\": rpc error: code = NotFound desc = could not find container \"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\": container with ID starting with f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.565087 4867 scope.go:117] "RemoveContainer" containerID="7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d" Jan 01 08:39:00 crc kubenswrapper[4867]: E0101 08:39:00.565328 4867 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\": container with ID starting with 7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d not found: ID does not exist" containerID="7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.565360 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d"} err="failed to get container status \"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\": rpc error: code = NotFound desc = could not find container \"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\": container with ID starting with 7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.565382 4867 scope.go:117] "RemoveContainer" containerID="4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20" Jan 01 08:39:00 crc kubenswrapper[4867]: E0101 08:39:00.565607 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\": container with ID starting with 4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20 not found: ID does not exist" containerID="4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.565632 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20"} err="failed to get container status \"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\": rpc error: code = NotFound desc = could not find container 
\"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\": container with ID starting with 4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.565650 4867 scope.go:117] "RemoveContainer" containerID="88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba" Jan 01 08:39:00 crc kubenswrapper[4867]: E0101 08:39:00.565824 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\": container with ID starting with 88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba not found: ID does not exist" containerID="88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.565851 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba"} err="failed to get container status \"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\": rpc error: code = NotFound desc = could not find container \"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\": container with ID starting with 88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.565868 4867 scope.go:117] "RemoveContainer" containerID="a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a" Jan 01 08:39:00 crc kubenswrapper[4867]: E0101 08:39:00.566071 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\": container with ID starting with a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a not found: ID does not exist" 
containerID="a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.566095 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a"} err="failed to get container status \"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\": rpc error: code = NotFound desc = could not find container \"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\": container with ID starting with a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.566112 4867 scope.go:117] "RemoveContainer" containerID="29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a" Jan 01 08:39:00 crc kubenswrapper[4867]: E0101 08:39:00.566275 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\": container with ID starting with 29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a not found: ID does not exist" containerID="29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.566299 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a"} err="failed to get container status \"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\": rpc error: code = NotFound desc = could not find container \"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\": container with ID starting with 29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.566314 4867 scope.go:117] 
"RemoveContainer" containerID="21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936" Jan 01 08:39:00 crc kubenswrapper[4867]: E0101 08:39:00.566477 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\": container with ID starting with 21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936 not found: ID does not exist" containerID="21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.566501 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936"} err="failed to get container status \"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\": rpc error: code = NotFound desc = could not find container \"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\": container with ID starting with 21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.566517 4867 scope.go:117] "RemoveContainer" containerID="a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2" Jan 01 08:39:00 crc kubenswrapper[4867]: E0101 08:39:00.566724 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\": container with ID starting with a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2 not found: ID does not exist" containerID="a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.566747 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2"} err="failed to get container status \"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\": rpc error: code = NotFound desc = could not find container \"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\": container with ID starting with a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.566765 4867 scope.go:117] "RemoveContainer" containerID="19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12" Jan 01 08:39:00 crc kubenswrapper[4867]: E0101 08:39:00.567032 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\": container with ID starting with 19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12 not found: ID does not exist" containerID="19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.567072 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12"} err="failed to get container status \"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\": rpc error: code = NotFound desc = could not find container \"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\": container with ID starting with 19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.567104 4867 scope.go:117] "RemoveContainer" containerID="5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.567432 4867 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7"} err="failed to get container status \"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7\": rpc error: code = NotFound desc = could not find container \"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7\": container with ID starting with 5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.567458 4867 scope.go:117] "RemoveContainer" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.567633 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b"} err="failed to get container status \"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\": rpc error: code = NotFound desc = could not find container \"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\": container with ID starting with f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.567659 4867 scope.go:117] "RemoveContainer" containerID="7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.567820 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d"} err="failed to get container status \"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\": rpc error: code = NotFound desc = could not find container \"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\": container with ID starting with 7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d not 
found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.567842 4867 scope.go:117] "RemoveContainer" containerID="4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.568034 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20"} err="failed to get container status \"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\": rpc error: code = NotFound desc = could not find container \"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\": container with ID starting with 4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.568059 4867 scope.go:117] "RemoveContainer" containerID="88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.568226 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba"} err="failed to get container status \"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\": rpc error: code = NotFound desc = could not find container \"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\": container with ID starting with 88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.568251 4867 scope.go:117] "RemoveContainer" containerID="a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.568406 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a"} err="failed to get 
container status \"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\": rpc error: code = NotFound desc = could not find container \"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\": container with ID starting with a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.568428 4867 scope.go:117] "RemoveContainer" containerID="29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.568582 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a"} err="failed to get container status \"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\": rpc error: code = NotFound desc = could not find container \"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\": container with ID starting with 29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.568605 4867 scope.go:117] "RemoveContainer" containerID="21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.568758 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936"} err="failed to get container status \"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\": rpc error: code = NotFound desc = could not find container \"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\": container with ID starting with 21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.568779 4867 scope.go:117] "RemoveContainer" 
containerID="a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.569034 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2"} err="failed to get container status \"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\": rpc error: code = NotFound desc = could not find container \"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\": container with ID starting with a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.569059 4867 scope.go:117] "RemoveContainer" containerID="19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.569239 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12"} err="failed to get container status \"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\": rpc error: code = NotFound desc = could not find container \"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\": container with ID starting with 19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.569260 4867 scope.go:117] "RemoveContainer" containerID="5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.569415 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7"} err="failed to get container status \"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7\": rpc error: code = NotFound desc = could 
not find container \"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7\": container with ID starting with 5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.569436 4867 scope.go:117] "RemoveContainer" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.569589 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b"} err="failed to get container status \"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\": rpc error: code = NotFound desc = could not find container \"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\": container with ID starting with f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.569610 4867 scope.go:117] "RemoveContainer" containerID="7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.569768 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d"} err="failed to get container status \"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\": rpc error: code = NotFound desc = could not find container \"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\": container with ID starting with 7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.569788 4867 scope.go:117] "RemoveContainer" containerID="4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 
08:39:00.569988 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20"} err="failed to get container status \"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\": rpc error: code = NotFound desc = could not find container \"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\": container with ID starting with 4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.570012 4867 scope.go:117] "RemoveContainer" containerID="88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.570176 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba"} err="failed to get container status \"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\": rpc error: code = NotFound desc = could not find container \"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\": container with ID starting with 88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.570197 4867 scope.go:117] "RemoveContainer" containerID="a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.571596 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a"} err="failed to get container status \"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\": rpc error: code = NotFound desc = could not find container \"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\": container with ID starting with 
a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.571623 4867 scope.go:117] "RemoveContainer" containerID="29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.571824 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a"} err="failed to get container status \"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\": rpc error: code = NotFound desc = could not find container \"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\": container with ID starting with 29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.571847 4867 scope.go:117] "RemoveContainer" containerID="21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.572083 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936"} err="failed to get container status \"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\": rpc error: code = NotFound desc = could not find container \"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\": container with ID starting with 21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.572111 4867 scope.go:117] "RemoveContainer" containerID="a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.572325 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2"} err="failed to get container status \"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\": rpc error: code = NotFound desc = could not find container \"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\": container with ID starting with a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.572349 4867 scope.go:117] "RemoveContainer" containerID="19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.572544 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12"} err="failed to get container status \"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\": rpc error: code = NotFound desc = could not find container \"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\": container with ID starting with 19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.572568 4867 scope.go:117] "RemoveContainer" containerID="5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.572759 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7"} err="failed to get container status \"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7\": rpc error: code = NotFound desc = could not find container \"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7\": container with ID starting with 5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7 not found: ID does not 
exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.572784 4867 scope.go:117] "RemoveContainer" containerID="f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.572996 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b"} err="failed to get container status \"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\": rpc error: code = NotFound desc = could not find container \"f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b\": container with ID starting with f26e72db7fc0e33365d6737cb2604bb7d68aa0da6cb3a8d9fb0ef513146c806b not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.573018 4867 scope.go:117] "RemoveContainer" containerID="7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.573229 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d"} err="failed to get container status \"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\": rpc error: code = NotFound desc = could not find container \"7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d\": container with ID starting with 7c13bd37a185c7e83afb35b6ab16a18d1924f2bf74471fee6db350a4f4d5664d not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.573253 4867 scope.go:117] "RemoveContainer" containerID="4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.573441 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20"} err="failed to get container status 
\"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\": rpc error: code = NotFound desc = could not find container \"4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20\": container with ID starting with 4b0c56ee7448b4bb8bb9af96ce3e06c06099c98c7b8ccb8e3ad32ffe3e555c20 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.573463 4867 scope.go:117] "RemoveContainer" containerID="88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.573624 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba"} err="failed to get container status \"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\": rpc error: code = NotFound desc = could not find container \"88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba\": container with ID starting with 88c577a45e467f8ce998d0f030517aea2391e7805778307172899c04280f43ba not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.573646 4867 scope.go:117] "RemoveContainer" containerID="a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.573805 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a"} err="failed to get container status \"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\": rpc error: code = NotFound desc = could not find container \"a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a\": container with ID starting with a27e9ee73c74a57349341fd155ae6b996adc1b22d9447123e15c7d0422737f4a not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.573830 4867 scope.go:117] "RemoveContainer" 
containerID="29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.574017 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a"} err="failed to get container status \"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\": rpc error: code = NotFound desc = could not find container \"29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a\": container with ID starting with 29a46f1d3fe9ab057a81c7bbb34c7118a649706cd8a8a6ada76ef43d5203da8a not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.574040 4867 scope.go:117] "RemoveContainer" containerID="21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.574267 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936"} err="failed to get container status \"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\": rpc error: code = NotFound desc = could not find container \"21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936\": container with ID starting with 21241bdb935ce00f84fb42ac0b50784e1845c0785f7e12f83b9fc99b7cf5a936 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.574290 4867 scope.go:117] "RemoveContainer" containerID="a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.574455 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2"} err="failed to get container status \"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\": rpc error: code = NotFound desc = could 
not find container \"a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2\": container with ID starting with a083e335d16bc423c090d6ae70678cd3596343faf603e0e53daed787020b0be2 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.574476 4867 scope.go:117] "RemoveContainer" containerID="19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.574638 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12"} err="failed to get container status \"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\": rpc error: code = NotFound desc = could not find container \"19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12\": container with ID starting with 19dc409a94bc6444123875e4b4b66c7c15c0c6f8957618b5dd494fcbc2dbcf12 not found: ID does not exist" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.574659 4867 scope.go:117] "RemoveContainer" containerID="5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7" Jan 01 08:39:00 crc kubenswrapper[4867]: I0101 08:39:00.574814 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7"} err="failed to get container status \"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7\": rpc error: code = NotFound desc = could not find container \"5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7\": container with ID starting with 5e27cc3c277a229235b0ff61f0b87aa1c0c080cec7fd84c149bc2b47722102a7 not found: ID does not exist" Jan 01 08:39:01 crc kubenswrapper[4867]: I0101 08:39:01.147222 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d26a65b-86d6-4603-bdeb-ffcb2f086fda" 
path="/var/lib/kubelet/pods/2d26a65b-86d6-4603-bdeb-ffcb2f086fda/volumes" Jan 01 08:39:01 crc kubenswrapper[4867]: I0101 08:39:01.484552 4867 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 01 08:39:01 crc kubenswrapper[4867]: I0101 08:39:01.503952 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wkbs8_da72a722-a2a3-459e-875a-e1605b442e05/kube-multus/2.log" Jan 01 08:39:01 crc kubenswrapper[4867]: I0101 08:39:01.504480 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wkbs8_da72a722-a2a3-459e-875a-e1605b442e05/kube-multus/1.log" Jan 01 08:39:01 crc kubenswrapper[4867]: I0101 08:39:01.504585 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wkbs8" event={"ID":"da72a722-a2a3-459e-875a-e1605b442e05","Type":"ContainerStarted","Data":"653d308ca372484f95bcf1a633c514719a5f5c77c15b15b644636c0d3863acb8"} Jan 01 08:39:01 crc kubenswrapper[4867]: I0101 08:39:01.536602 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" event={"ID":"8571df12-51be-414d-b0b5-9f0e024851be","Type":"ContainerStarted","Data":"027cd32c35ae45ff9dd3b07d39c3c9ff35f40ee858b5e3a24120b804f5682d7e"} Jan 01 08:39:01 crc kubenswrapper[4867]: I0101 08:39:01.536660 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" event={"ID":"8571df12-51be-414d-b0b5-9f0e024851be","Type":"ContainerStarted","Data":"e7eb91c037eb032d63555753bc4190f7af808db2eec5e580d286e1edfa9171f7"} Jan 01 08:39:01 crc kubenswrapper[4867]: I0101 08:39:01.536677 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" event={"ID":"8571df12-51be-414d-b0b5-9f0e024851be","Type":"ContainerStarted","Data":"b1f65fb50e93e573ec6a9c2de631818d9c1caee04d852f62fd6813aeb7a0b2cd"} Jan 01 08:39:01 crc kubenswrapper[4867]: 
I0101 08:39:01.536860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" event={"ID":"8571df12-51be-414d-b0b5-9f0e024851be","Type":"ContainerStarted","Data":"318353ffc2058ecb6c86e7f50e82217956a0c38781e8d28fda9bdab2e8cc47f4"} Jan 01 08:39:02 crc kubenswrapper[4867]: I0101 08:39:02.545988 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" event={"ID":"8571df12-51be-414d-b0b5-9f0e024851be","Type":"ContainerStarted","Data":"f0862e3f1265ca8e917ccf6430092d65bf89dce79b8ca308795989a37f6fe7f2"} Jan 01 08:39:02 crc kubenswrapper[4867]: I0101 08:39:02.546367 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" event={"ID":"8571df12-51be-414d-b0b5-9f0e024851be","Type":"ContainerStarted","Data":"079365c851a89ce986874ad4221857d57247d5b07c0310fc5aaa94d8644dfdd1"} Jan 01 08:39:04 crc kubenswrapper[4867]: I0101 08:39:04.567396 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" event={"ID":"8571df12-51be-414d-b0b5-9f0e024851be","Type":"ContainerStarted","Data":"336de95655743619be808cc425129464d2e13bc8ab89b75e11286017fa237807"} Jan 01 08:39:06 crc kubenswrapper[4867]: I0101 08:39:06.590666 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" event={"ID":"8571df12-51be-414d-b0b5-9f0e024851be","Type":"ContainerStarted","Data":"95b5657b04d7631000b490adc7d2e073fb606a7884c73f3dea07ca890987f8cc"} Jan 01 08:39:06 crc kubenswrapper[4867]: I0101 08:39:06.591422 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:39:06 crc kubenswrapper[4867]: I0101 08:39:06.591446 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:39:06 crc kubenswrapper[4867]: I0101 08:39:06.627933 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" podStartSLOduration=7.627915825 podStartE2EDuration="7.627915825s" podCreationTimestamp="2026-01-01 08:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:39:06.624915863 +0000 UTC m=+755.760184642" watchObservedRunningTime="2026-01-01 08:39:06.627915825 +0000 UTC m=+755.763184614" Jan 01 08:39:06 crc kubenswrapper[4867]: I0101 08:39:06.631254 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.064237 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-xmktq"] Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.064901 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.067155 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.067503 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.070508 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.070684 4867 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-6tq7b" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.150460 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/83f18347-b9b6-4c1c-ab58-6d987317b853-crc-storage\") pod 
\"crc-storage-crc-xmktq\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.150542 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr4lp\" (UniqueName: \"kubernetes.io/projected/83f18347-b9b6-4c1c-ab58-6d987317b853-kube-api-access-rr4lp\") pod \"crc-storage-crc-xmktq\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.150590 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/83f18347-b9b6-4c1c-ab58-6d987317b853-node-mnt\") pod \"crc-storage-crc-xmktq\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.251440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/83f18347-b9b6-4c1c-ab58-6d987317b853-node-mnt\") pod \"crc-storage-crc-xmktq\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.251516 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/83f18347-b9b6-4c1c-ab58-6d987317b853-crc-storage\") pod \"crc-storage-crc-xmktq\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.251584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr4lp\" (UniqueName: \"kubernetes.io/projected/83f18347-b9b6-4c1c-ab58-6d987317b853-kube-api-access-rr4lp\") pod \"crc-storage-crc-xmktq\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " 
pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.251854 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/83f18347-b9b6-4c1c-ab58-6d987317b853-node-mnt\") pod \"crc-storage-crc-xmktq\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.252676 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/83f18347-b9b6-4c1c-ab58-6d987317b853-crc-storage\") pod \"crc-storage-crc-xmktq\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.270438 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr4lp\" (UniqueName: \"kubernetes.io/projected/83f18347-b9b6-4c1c-ab58-6d987317b853-kube-api-access-rr4lp\") pod \"crc-storage-crc-xmktq\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.383153 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: E0101 08:39:07.413150 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xmktq_crc-storage_83f18347-b9b6-4c1c-ab58-6d987317b853_0(4042441fcf02246d2d755852780ed11f26104e8613fd3aca8b7ac17c102e8447): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 01 08:39:07 crc kubenswrapper[4867]: E0101 08:39:07.413596 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xmktq_crc-storage_83f18347-b9b6-4c1c-ab58-6d987317b853_0(4042441fcf02246d2d755852780ed11f26104e8613fd3aca8b7ac17c102e8447): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: E0101 08:39:07.413631 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xmktq_crc-storage_83f18347-b9b6-4c1c-ab58-6d987317b853_0(4042441fcf02246d2d755852780ed11f26104e8613fd3aca8b7ac17c102e8447): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: E0101 08:39:07.413717 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-xmktq_crc-storage(83f18347-b9b6-4c1c-ab58-6d987317b853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-xmktq_crc-storage(83f18347-b9b6-4c1c-ab58-6d987317b853)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xmktq_crc-storage_83f18347-b9b6-4c1c-ab58-6d987317b853_0(4042441fcf02246d2d755852780ed11f26104e8613fd3aca8b7ac17c102e8447): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-xmktq" podUID="83f18347-b9b6-4c1c-ab58-6d987317b853" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.597869 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.663182 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.682104 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-xmktq"] Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.682238 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: I0101 08:39:07.682950 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: E0101 08:39:07.714788 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xmktq_crc-storage_83f18347-b9b6-4c1c-ab58-6d987317b853_0(6ffecb10934923a199bb3d3d8afed1e4ff36f07a8d612940e2fbbc5ef565f9a9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 01 08:39:07 crc kubenswrapper[4867]: E0101 08:39:07.714855 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xmktq_crc-storage_83f18347-b9b6-4c1c-ab58-6d987317b853_0(6ffecb10934923a199bb3d3d8afed1e4ff36f07a8d612940e2fbbc5ef565f9a9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: E0101 08:39:07.714944 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xmktq_crc-storage_83f18347-b9b6-4c1c-ab58-6d987317b853_0(6ffecb10934923a199bb3d3d8afed1e4ff36f07a8d612940e2fbbc5ef565f9a9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:07 crc kubenswrapper[4867]: E0101 08:39:07.715027 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-xmktq_crc-storage(83f18347-b9b6-4c1c-ab58-6d987317b853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-xmktq_crc-storage(83f18347-b9b6-4c1c-ab58-6d987317b853)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xmktq_crc-storage_83f18347-b9b6-4c1c-ab58-6d987317b853_0(6ffecb10934923a199bb3d3d8afed1e4ff36f07a8d612940e2fbbc5ef565f9a9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-xmktq" podUID="83f18347-b9b6-4c1c-ab58-6d987317b853" Jan 01 08:39:21 crc kubenswrapper[4867]: I0101 08:39:21.127754 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:21 crc kubenswrapper[4867]: I0101 08:39:21.133634 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:21 crc kubenswrapper[4867]: I0101 08:39:21.331624 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:39:21 crc kubenswrapper[4867]: I0101 08:39:21.332259 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:39:21 crc kubenswrapper[4867]: I0101 08:39:21.332332 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:39:21 crc kubenswrapper[4867]: I0101 08:39:21.333296 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b489a809c46fea0670e0a497e09fb93b663297b7dde42c0a30153339b2adc104"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 08:39:21 crc kubenswrapper[4867]: I0101 08:39:21.333380 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://b489a809c46fea0670e0a497e09fb93b663297b7dde42c0a30153339b2adc104" gracePeriod=600 Jan 01 08:39:21 crc kubenswrapper[4867]: I0101 08:39:21.428363 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["crc-storage/crc-storage-crc-xmktq"] Jan 01 08:39:21 crc kubenswrapper[4867]: W0101 08:39:21.434443 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83f18347_b9b6_4c1c_ab58_6d987317b853.slice/crio-3414541e8415c595ac0803ce8be7f0203db7909163b00529d6ebe08e72aac7c7 WatchSource:0}: Error finding container 3414541e8415c595ac0803ce8be7f0203db7909163b00529d6ebe08e72aac7c7: Status 404 returned error can't find the container with id 3414541e8415c595ac0803ce8be7f0203db7909163b00529d6ebe08e72aac7c7 Jan 01 08:39:21 crc kubenswrapper[4867]: I0101 08:39:21.437780 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 08:39:21 crc kubenswrapper[4867]: I0101 08:39:21.696274 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xmktq" event={"ID":"83f18347-b9b6-4c1c-ab58-6d987317b853","Type":"ContainerStarted","Data":"3414541e8415c595ac0803ce8be7f0203db7909163b00529d6ebe08e72aac7c7"} Jan 01 08:39:22 crc kubenswrapper[4867]: I0101 08:39:22.706130 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="b489a809c46fea0670e0a497e09fb93b663297b7dde42c0a30153339b2adc104" exitCode=0 Jan 01 08:39:22 crc kubenswrapper[4867]: I0101 08:39:22.706357 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"b489a809c46fea0670e0a497e09fb93b663297b7dde42c0a30153339b2adc104"} Jan 01 08:39:22 crc kubenswrapper[4867]: I0101 08:39:22.706720 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"5c0242f8cb2cb86cd3c1961752ae798238bc46747b9db37482dfc5091eb3d814"} Jan 01 
08:39:22 crc kubenswrapper[4867]: I0101 08:39:22.706742 4867 scope.go:117] "RemoveContainer" containerID="c5738a332d05a46a3e480cd27871b24fa7ab3c38831377e98113a1cb4db4d6b9" Jan 01 08:39:22 crc kubenswrapper[4867]: I0101 08:39:22.710691 4867 generic.go:334] "Generic (PLEG): container finished" podID="83f18347-b9b6-4c1c-ab58-6d987317b853" containerID="6521b471c3b91363304ff1b54df8af56dceaefccba854ac8c9ba2997112ae659" exitCode=0 Jan 01 08:39:22 crc kubenswrapper[4867]: I0101 08:39:22.710717 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xmktq" event={"ID":"83f18347-b9b6-4c1c-ab58-6d987317b853","Type":"ContainerDied","Data":"6521b471c3b91363304ff1b54df8af56dceaefccba854ac8c9ba2997112ae659"} Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.027512 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.187334 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/83f18347-b9b6-4c1c-ab58-6d987317b853-crc-storage\") pod \"83f18347-b9b6-4c1c-ab58-6d987317b853\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.187432 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/83f18347-b9b6-4c1c-ab58-6d987317b853-node-mnt\") pod \"83f18347-b9b6-4c1c-ab58-6d987317b853\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.187519 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f18347-b9b6-4c1c-ab58-6d987317b853-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "83f18347-b9b6-4c1c-ab58-6d987317b853" (UID: "83f18347-b9b6-4c1c-ab58-6d987317b853"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.187551 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr4lp\" (UniqueName: \"kubernetes.io/projected/83f18347-b9b6-4c1c-ab58-6d987317b853-kube-api-access-rr4lp\") pod \"83f18347-b9b6-4c1c-ab58-6d987317b853\" (UID: \"83f18347-b9b6-4c1c-ab58-6d987317b853\") " Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.188070 4867 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/83f18347-b9b6-4c1c-ab58-6d987317b853-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.196544 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f18347-b9b6-4c1c-ab58-6d987317b853-kube-api-access-rr4lp" (OuterVolumeSpecName: "kube-api-access-rr4lp") pod "83f18347-b9b6-4c1c-ab58-6d987317b853" (UID: "83f18347-b9b6-4c1c-ab58-6d987317b853"). InnerVolumeSpecName "kube-api-access-rr4lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.213374 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83f18347-b9b6-4c1c-ab58-6d987317b853-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "83f18347-b9b6-4c1c-ab58-6d987317b853" (UID: "83f18347-b9b6-4c1c-ab58-6d987317b853"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.290247 4867 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/83f18347-b9b6-4c1c-ab58-6d987317b853-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.290315 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr4lp\" (UniqueName: \"kubernetes.io/projected/83f18347-b9b6-4c1c-ab58-6d987317b853-kube-api-access-rr4lp\") on node \"crc\" DevicePath \"\"" Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.733159 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xmktq" event={"ID":"83f18347-b9b6-4c1c-ab58-6d987317b853","Type":"ContainerDied","Data":"3414541e8415c595ac0803ce8be7f0203db7909163b00529d6ebe08e72aac7c7"} Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.733211 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3414541e8415c595ac0803ce8be7f0203db7909163b00529d6ebe08e72aac7c7" Jan 01 08:39:24 crc kubenswrapper[4867]: I0101 08:39:24.733263 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-xmktq" Jan 01 08:39:30 crc kubenswrapper[4867]: I0101 08:39:30.054131 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g8smn" Jan 01 08:39:31 crc kubenswrapper[4867]: I0101 08:39:31.674670 4867 scope.go:117] "RemoveContainer" containerID="602fa2ee8eb9678b61a838c41ada5620972c139005d78b06dd99cf10077d9b12" Jan 01 08:39:31 crc kubenswrapper[4867]: I0101 08:39:31.778924 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wkbs8_da72a722-a2a3-459e-875a-e1605b442e05/kube-multus/2.log" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.539122 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9"] Jan 01 08:39:32 crc kubenswrapper[4867]: E0101 08:39:32.539774 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f18347-b9b6-4c1c-ab58-6d987317b853" containerName="storage" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.539798 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f18347-b9b6-4c1c-ab58-6d987317b853" containerName="storage" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.539982 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="83f18347-b9b6-4c1c-ab58-6d987317b853" containerName="storage" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.540970 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.543354 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.549685 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9"] Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.728389 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfrqf\" (UniqueName: \"kubernetes.io/projected/c55b6c63-182a-4871-8b23-55a3edc099a6-kube-api-access-rfrqf\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.728497 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.728549 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:32 crc kubenswrapper[4867]: 
I0101 08:39:32.830451 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.830554 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.830689 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfrqf\" (UniqueName: \"kubernetes.io/projected/c55b6c63-182a-4871-8b23-55a3edc099a6-kube-api-access-rfrqf\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.831194 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.831809 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.865926 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfrqf\" (UniqueName: \"kubernetes.io/projected/c55b6c63-182a-4871-8b23-55a3edc099a6-kube-api-access-rfrqf\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:32 crc kubenswrapper[4867]: I0101 08:39:32.867850 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:33 crc kubenswrapper[4867]: I0101 08:39:33.367337 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9"] Jan 01 08:39:33 crc kubenswrapper[4867]: W0101 08:39:33.377470 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc55b6c63_182a_4871_8b23_55a3edc099a6.slice/crio-ecbcd9a8b90f5f2e5e5da98f8553aa00e9734012dab615113f52aa891a3ee4f4 WatchSource:0}: Error finding container ecbcd9a8b90f5f2e5e5da98f8553aa00e9734012dab615113f52aa891a3ee4f4: Status 404 returned error can't find the container with id ecbcd9a8b90f5f2e5e5da98f8553aa00e9734012dab615113f52aa891a3ee4f4 Jan 01 08:39:33 crc kubenswrapper[4867]: I0101 08:39:33.796046 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" 
event={"ID":"c55b6c63-182a-4871-8b23-55a3edc099a6","Type":"ContainerStarted","Data":"7635204f495f5de1a4e14e5fd567d87ec1d34ef929607ca5d4087638f11c0736"} Jan 01 08:39:33 crc kubenswrapper[4867]: I0101 08:39:33.796112 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" event={"ID":"c55b6c63-182a-4871-8b23-55a3edc099a6","Type":"ContainerStarted","Data":"ecbcd9a8b90f5f2e5e5da98f8553aa00e9734012dab615113f52aa891a3ee4f4"} Jan 01 08:39:34 crc kubenswrapper[4867]: I0101 08:39:34.805285 4867 generic.go:334] "Generic (PLEG): container finished" podID="c55b6c63-182a-4871-8b23-55a3edc099a6" containerID="7635204f495f5de1a4e14e5fd567d87ec1d34ef929607ca5d4087638f11c0736" exitCode=0 Jan 01 08:39:34 crc kubenswrapper[4867]: I0101 08:39:34.805345 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" event={"ID":"c55b6c63-182a-4871-8b23-55a3edc099a6","Type":"ContainerDied","Data":"7635204f495f5de1a4e14e5fd567d87ec1d34ef929607ca5d4087638f11c0736"} Jan 01 08:39:34 crc kubenswrapper[4867]: I0101 08:39:34.883643 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rltnr"] Jan 01 08:39:34 crc kubenswrapper[4867]: I0101 08:39:34.885808 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:34 crc kubenswrapper[4867]: I0101 08:39:34.897352 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rltnr"] Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.065767 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lwfs\" (UniqueName: \"kubernetes.io/projected/6df70a8e-8c48-4ad7-b49c-253c574f7b71-kube-api-access-8lwfs\") pod \"redhat-operators-rltnr\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.066020 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-catalog-content\") pod \"redhat-operators-rltnr\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.066110 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-utilities\") pod \"redhat-operators-rltnr\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.167071 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-catalog-content\") pod \"redhat-operators-rltnr\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.167201 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-utilities\") pod \"redhat-operators-rltnr\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.167275 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lwfs\" (UniqueName: \"kubernetes.io/projected/6df70a8e-8c48-4ad7-b49c-253c574f7b71-kube-api-access-8lwfs\") pod \"redhat-operators-rltnr\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.167865 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-catalog-content\") pod \"redhat-operators-rltnr\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.167956 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-utilities\") pod \"redhat-operators-rltnr\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.201121 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lwfs\" (UniqueName: \"kubernetes.io/projected/6df70a8e-8c48-4ad7-b49c-253c574f7b71-kube-api-access-8lwfs\") pod \"redhat-operators-rltnr\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.214482 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.506411 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rltnr"] Jan 01 08:39:35 crc kubenswrapper[4867]: W0101 08:39:35.512755 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6df70a8e_8c48_4ad7_b49c_253c574f7b71.slice/crio-862fe3bc3cf3e0fe3a89b8879c24cd7706f94e10a02589560be159f49fe094c1 WatchSource:0}: Error finding container 862fe3bc3cf3e0fe3a89b8879c24cd7706f94e10a02589560be159f49fe094c1: Status 404 returned error can't find the container with id 862fe3bc3cf3e0fe3a89b8879c24cd7706f94e10a02589560be159f49fe094c1 Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.811444 4867 generic.go:334] "Generic (PLEG): container finished" podID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" containerID="94d1ad37452e5979c8b02a8ae22858efb4e41797b6b8368f3d94dbf1ba0c8f6b" exitCode=0 Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.811488 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rltnr" event={"ID":"6df70a8e-8c48-4ad7-b49c-253c574f7b71","Type":"ContainerDied","Data":"94d1ad37452e5979c8b02a8ae22858efb4e41797b6b8368f3d94dbf1ba0c8f6b"} Jan 01 08:39:35 crc kubenswrapper[4867]: I0101 08:39:35.811518 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rltnr" event={"ID":"6df70a8e-8c48-4ad7-b49c-253c574f7b71","Type":"ContainerStarted","Data":"862fe3bc3cf3e0fe3a89b8879c24cd7706f94e10a02589560be159f49fe094c1"} Jan 01 08:39:36 crc kubenswrapper[4867]: I0101 08:39:36.822155 4867 generic.go:334] "Generic (PLEG): container finished" podID="c55b6c63-182a-4871-8b23-55a3edc099a6" containerID="f6e177b6e1e3cc9aa5c065872dd1cf9cb54998b9a65781965634c02d6f905cac" exitCode=0 Jan 01 08:39:36 crc kubenswrapper[4867]: I0101 08:39:36.822286 
4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" event={"ID":"c55b6c63-182a-4871-8b23-55a3edc099a6","Type":"ContainerDied","Data":"f6e177b6e1e3cc9aa5c065872dd1cf9cb54998b9a65781965634c02d6f905cac"} Jan 01 08:39:37 crc kubenswrapper[4867]: I0101 08:39:37.845558 4867 generic.go:334] "Generic (PLEG): container finished" podID="c55b6c63-182a-4871-8b23-55a3edc099a6" containerID="f7cfaaeb8c950f4c0030edc4bc3ac61384cee0e5f24ce590e72b7c3082d3b498" exitCode=0 Jan 01 08:39:37 crc kubenswrapper[4867]: I0101 08:39:37.845641 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" event={"ID":"c55b6c63-182a-4871-8b23-55a3edc099a6","Type":"ContainerDied","Data":"f7cfaaeb8c950f4c0030edc4bc3ac61384cee0e5f24ce590e72b7c3082d3b498"} Jan 01 08:39:37 crc kubenswrapper[4867]: I0101 08:39:37.848944 4867 generic.go:334] "Generic (PLEG): container finished" podID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" containerID="09b75ca6c152730a04e0606c1d302b69dd692c85df356d55d52c369b61346c2b" exitCode=0 Jan 01 08:39:37 crc kubenswrapper[4867]: I0101 08:39:37.849022 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rltnr" event={"ID":"6df70a8e-8c48-4ad7-b49c-253c574f7b71","Type":"ContainerDied","Data":"09b75ca6c152730a04e0606c1d302b69dd692c85df356d55d52c369b61346c2b"} Jan 01 08:39:38 crc kubenswrapper[4867]: I0101 08:39:38.858365 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rltnr" event={"ID":"6df70a8e-8c48-4ad7-b49c-253c574f7b71","Type":"ContainerStarted","Data":"dfa246c2bb32c45f517df66b12edc5b7137dd4fd11bf05e0d55f802b1f4e10c9"} Jan 01 08:39:38 crc kubenswrapper[4867]: I0101 08:39:38.888556 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rltnr" 
podStartSLOduration=2.420197532 podStartE2EDuration="4.888535749s" podCreationTimestamp="2026-01-01 08:39:34 +0000 UTC" firstStartedPulling="2026-01-01 08:39:35.812846839 +0000 UTC m=+784.948115598" lastFinishedPulling="2026-01-01 08:39:38.281185016 +0000 UTC m=+787.416453815" observedRunningTime="2026-01-01 08:39:38.882089242 +0000 UTC m=+788.017358041" watchObservedRunningTime="2026-01-01 08:39:38.888535749 +0000 UTC m=+788.023804528" Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.193384 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.323323 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-bundle\") pod \"c55b6c63-182a-4871-8b23-55a3edc099a6\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.323380 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-util\") pod \"c55b6c63-182a-4871-8b23-55a3edc099a6\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.323453 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfrqf\" (UniqueName: \"kubernetes.io/projected/c55b6c63-182a-4871-8b23-55a3edc099a6-kube-api-access-rfrqf\") pod \"c55b6c63-182a-4871-8b23-55a3edc099a6\" (UID: \"c55b6c63-182a-4871-8b23-55a3edc099a6\") " Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.324220 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-bundle" (OuterVolumeSpecName: "bundle") pod 
"c55b6c63-182a-4871-8b23-55a3edc099a6" (UID: "c55b6c63-182a-4871-8b23-55a3edc099a6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.333175 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55b6c63-182a-4871-8b23-55a3edc099a6-kube-api-access-rfrqf" (OuterVolumeSpecName: "kube-api-access-rfrqf") pod "c55b6c63-182a-4871-8b23-55a3edc099a6" (UID: "c55b6c63-182a-4871-8b23-55a3edc099a6"). InnerVolumeSpecName "kube-api-access-rfrqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.337795 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-util" (OuterVolumeSpecName: "util") pod "c55b6c63-182a-4871-8b23-55a3edc099a6" (UID: "c55b6c63-182a-4871-8b23-55a3edc099a6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.424750 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.424801 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c55b6c63-182a-4871-8b23-55a3edc099a6-util\") on node \"crc\" DevicePath \"\"" Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.424820 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfrqf\" (UniqueName: \"kubernetes.io/projected/c55b6c63-182a-4871-8b23-55a3edc099a6-kube-api-access-rfrqf\") on node \"crc\" DevicePath \"\"" Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.868348 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" event={"ID":"c55b6c63-182a-4871-8b23-55a3edc099a6","Type":"ContainerDied","Data":"ecbcd9a8b90f5f2e5e5da98f8553aa00e9734012dab615113f52aa891a3ee4f4"} Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.868397 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecbcd9a8b90f5f2e5e5da98f8553aa00e9734012dab615113f52aa891a3ee4f4" Jan 01 08:39:39 crc kubenswrapper[4867]: I0101 08:39:39.868391 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9" Jan 01 08:39:42 crc kubenswrapper[4867]: I0101 08:39:42.990815 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-4nkqq"] Jan 01 08:39:42 crc kubenswrapper[4867]: E0101 08:39:42.991366 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c55b6c63-182a-4871-8b23-55a3edc099a6" containerName="extract" Jan 01 08:39:42 crc kubenswrapper[4867]: I0101 08:39:42.991380 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55b6c63-182a-4871-8b23-55a3edc099a6" containerName="extract" Jan 01 08:39:42 crc kubenswrapper[4867]: E0101 08:39:42.991405 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c55b6c63-182a-4871-8b23-55a3edc099a6" containerName="pull" Jan 01 08:39:42 crc kubenswrapper[4867]: I0101 08:39:42.991414 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55b6c63-182a-4871-8b23-55a3edc099a6" containerName="pull" Jan 01 08:39:42 crc kubenswrapper[4867]: E0101 08:39:42.991427 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c55b6c63-182a-4871-8b23-55a3edc099a6" containerName="util" Jan 01 08:39:42 crc kubenswrapper[4867]: I0101 08:39:42.991436 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55b6c63-182a-4871-8b23-55a3edc099a6" containerName="util" 
Jan 01 08:39:42 crc kubenswrapper[4867]: I0101 08:39:42.991559 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c55b6c63-182a-4871-8b23-55a3edc099a6" containerName="extract" Jan 01 08:39:42 crc kubenswrapper[4867]: I0101 08:39:42.991984 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-4nkqq" Jan 01 08:39:42 crc kubenswrapper[4867]: I0101 08:39:42.994676 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 01 08:39:42 crc kubenswrapper[4867]: I0101 08:39:42.994830 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 01 08:39:43 crc kubenswrapper[4867]: I0101 08:39:43.001734 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-4nkqq"] Jan 01 08:39:43 crc kubenswrapper[4867]: I0101 08:39:43.001982 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-kmsfh" Jan 01 08:39:43 crc kubenswrapper[4867]: I0101 08:39:43.175436 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fdlz\" (UniqueName: \"kubernetes.io/projected/5814d320-5a21-4996-96a3-0a19c1d304f2-kube-api-access-8fdlz\") pod \"nmstate-operator-6769fb99d-4nkqq\" (UID: \"5814d320-5a21-4996-96a3-0a19c1d304f2\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-4nkqq" Jan 01 08:39:43 crc kubenswrapper[4867]: I0101 08:39:43.276312 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fdlz\" (UniqueName: \"kubernetes.io/projected/5814d320-5a21-4996-96a3-0a19c1d304f2-kube-api-access-8fdlz\") pod \"nmstate-operator-6769fb99d-4nkqq\" (UID: \"5814d320-5a21-4996-96a3-0a19c1d304f2\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-4nkqq" Jan 01 08:39:43 crc 
kubenswrapper[4867]: I0101 08:39:43.297996 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fdlz\" (UniqueName: \"kubernetes.io/projected/5814d320-5a21-4996-96a3-0a19c1d304f2-kube-api-access-8fdlz\") pod \"nmstate-operator-6769fb99d-4nkqq\" (UID: \"5814d320-5a21-4996-96a3-0a19c1d304f2\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-4nkqq" Jan 01 08:39:43 crc kubenswrapper[4867]: I0101 08:39:43.309436 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-4nkqq" Jan 01 08:39:43 crc kubenswrapper[4867]: I0101 08:39:43.540945 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-4nkqq"] Jan 01 08:39:43 crc kubenswrapper[4867]: W0101 08:39:43.549645 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5814d320_5a21_4996_96a3_0a19c1d304f2.slice/crio-ac03e67987752dcf0323693694b31f5d3d35623f46b2a175d84f776c6d9ac99d WatchSource:0}: Error finding container ac03e67987752dcf0323693694b31f5d3d35623f46b2a175d84f776c6d9ac99d: Status 404 returned error can't find the container with id ac03e67987752dcf0323693694b31f5d3d35623f46b2a175d84f776c6d9ac99d Jan 01 08:39:43 crc kubenswrapper[4867]: I0101 08:39:43.895026 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-4nkqq" event={"ID":"5814d320-5a21-4996-96a3-0a19c1d304f2","Type":"ContainerStarted","Data":"ac03e67987752dcf0323693694b31f5d3d35623f46b2a175d84f776c6d9ac99d"} Jan 01 08:39:45 crc kubenswrapper[4867]: I0101 08:39:45.215527 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:45 crc kubenswrapper[4867]: I0101 08:39:45.215589 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:45 crc kubenswrapper[4867]: I0101 08:39:45.258176 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:45 crc kubenswrapper[4867]: I0101 08:39:45.966181 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:47 crc kubenswrapper[4867]: I0101 08:39:47.671297 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rltnr"] Jan 01 08:39:47 crc kubenswrapper[4867]: I0101 08:39:47.919789 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rltnr" podUID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" containerName="registry-server" containerID="cri-o://dfa246c2bb32c45f517df66b12edc5b7137dd4fd11bf05e0d55f802b1f4e10c9" gracePeriod=2 Jan 01 08:39:50 crc kubenswrapper[4867]: I0101 08:39:50.942100 4867 generic.go:334] "Generic (PLEG): container finished" podID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" containerID="dfa246c2bb32c45f517df66b12edc5b7137dd4fd11bf05e0d55f802b1f4e10c9" exitCode=0 Jan 01 08:39:50 crc kubenswrapper[4867]: I0101 08:39:50.942200 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rltnr" event={"ID":"6df70a8e-8c48-4ad7-b49c-253c574f7b71","Type":"ContainerDied","Data":"dfa246c2bb32c45f517df66b12edc5b7137dd4fd11bf05e0d55f802b1f4e10c9"} Jan 01 08:39:50 crc kubenswrapper[4867]: I0101 08:39:50.942696 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rltnr" event={"ID":"6df70a8e-8c48-4ad7-b49c-253c574f7b71","Type":"ContainerDied","Data":"862fe3bc3cf3e0fe3a89b8879c24cd7706f94e10a02589560be159f49fe094c1"} Jan 01 08:39:50 crc kubenswrapper[4867]: I0101 08:39:50.942711 4867 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="862fe3bc3cf3e0fe3a89b8879c24cd7706f94e10a02589560be159f49fe094c1" Jan 01 08:39:50 crc kubenswrapper[4867]: I0101 08:39:50.965567 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.090136 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lwfs\" (UniqueName: \"kubernetes.io/projected/6df70a8e-8c48-4ad7-b49c-253c574f7b71-kube-api-access-8lwfs\") pod \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.090437 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-utilities\") pod \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.090533 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-catalog-content\") pod \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\" (UID: \"6df70a8e-8c48-4ad7-b49c-253c574f7b71\") " Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.091993 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-utilities" (OuterVolumeSpecName: "utilities") pod "6df70a8e-8c48-4ad7-b49c-253c574f7b71" (UID: "6df70a8e-8c48-4ad7-b49c-253c574f7b71"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.097117 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df70a8e-8c48-4ad7-b49c-253c574f7b71-kube-api-access-8lwfs" (OuterVolumeSpecName: "kube-api-access-8lwfs") pod "6df70a8e-8c48-4ad7-b49c-253c574f7b71" (UID: "6df70a8e-8c48-4ad7-b49c-253c574f7b71"). InnerVolumeSpecName "kube-api-access-8lwfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.193930 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lwfs\" (UniqueName: \"kubernetes.io/projected/6df70a8e-8c48-4ad7-b49c-253c574f7b71-kube-api-access-8lwfs\") on node \"crc\" DevicePath \"\"" Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.193958 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.225321 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6df70a8e-8c48-4ad7-b49c-253c574f7b71" (UID: "6df70a8e-8c48-4ad7-b49c-253c574f7b71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.294863 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6df70a8e-8c48-4ad7-b49c-253c574f7b71-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.948872 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rltnr" Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.987409 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rltnr"] Jan 01 08:39:51 crc kubenswrapper[4867]: I0101 08:39:51.998232 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rltnr"] Jan 01 08:39:53 crc kubenswrapper[4867]: I0101 08:39:53.134242 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" path="/var/lib/kubelet/pods/6df70a8e-8c48-4ad7-b49c-253c574f7b71/volumes" Jan 01 08:39:53 crc kubenswrapper[4867]: I0101 08:39:53.966173 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-4nkqq" event={"ID":"5814d320-5a21-4996-96a3-0a19c1d304f2","Type":"ContainerStarted","Data":"68dfbf2f84e16f633f7255b49f1067253dd7fe81599e50b8ea7e73620bfc8b05"} Jan 01 08:39:53 crc kubenswrapper[4867]: I0101 08:39:53.993712 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-4nkqq" podStartSLOduration=2.464100053 podStartE2EDuration="11.993688659s" podCreationTimestamp="2026-01-01 08:39:42 +0000 UTC" firstStartedPulling="2026-01-01 08:39:43.552492405 +0000 UTC m=+792.687761174" lastFinishedPulling="2026-01-01 08:39:53.082081011 +0000 UTC m=+802.217349780" observedRunningTime="2026-01-01 08:39:53.989660418 +0000 UTC m=+803.124929247" watchObservedRunningTime="2026-01-01 08:39:53.993688659 +0000 UTC m=+803.128957448" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.061073 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb"] Jan 01 08:39:55 crc kubenswrapper[4867]: E0101 08:39:55.064306 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" 
containerName="extract-utilities" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.064356 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" containerName="extract-utilities" Jan 01 08:39:55 crc kubenswrapper[4867]: E0101 08:39:55.064433 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" containerName="extract-content" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.064453 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" containerName="extract-content" Jan 01 08:39:55 crc kubenswrapper[4867]: E0101 08:39:55.064478 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" containerName="registry-server" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.064497 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" containerName="registry-server" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.064800 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df70a8e-8c48-4ad7-b49c-253c574f7b71" containerName="registry-server" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.066177 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.068357 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mnpst" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.078887 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj"] Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.079517 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.110755 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.134227 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj"] Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.136068 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-kt6pp"] Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.136760 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.145992 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb"] Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.225746 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5"] Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.226400 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.228605 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.229666 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lgzrb" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.232710 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.240953 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5"] Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.251288 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgfnz\" (UniqueName: \"kubernetes.io/projected/55028d16-a688-40c2-a1e7-eacb136d5ea1-kube-api-access-bgfnz\") pod \"nmstate-webhook-f8fb84555-x2hhj\" (UID: \"55028d16-a688-40c2-a1e7-eacb136d5ea1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.251375 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d382f382-e329-4c64-9d5f-daa382470de3-dbus-socket\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.251412 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d382f382-e329-4c64-9d5f-daa382470de3-ovs-socket\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " 
pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.252159 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/55028d16-a688-40c2-a1e7-eacb136d5ea1-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-x2hhj\" (UID: \"55028d16-a688-40c2-a1e7-eacb136d5ea1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.252269 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm64s\" (UniqueName: \"kubernetes.io/projected/5c066e2d-baa4-4f40-a024-e8b4a5c67e1a-kube-api-access-jm64s\") pod \"nmstate-metrics-7f7f7578db-56bcb\" (UID: \"5c066e2d-baa4-4f40-a024-e8b4a5c67e1a\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.252403 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d382f382-e329-4c64-9d5f-daa382470de3-nmstate-lock\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.252427 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qddzp\" (UniqueName: \"kubernetes.io/projected/d382f382-e329-4c64-9d5f-daa382470de3-kube-api-access-qddzp\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353382 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm64s\" (UniqueName: \"kubernetes.io/projected/5c066e2d-baa4-4f40-a024-e8b4a5c67e1a-kube-api-access-jm64s\") pod 
\"nmstate-metrics-7f7f7578db-56bcb\" (UID: \"5c066e2d-baa4-4f40-a024-e8b4a5c67e1a\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353433 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d382f382-e329-4c64-9d5f-daa382470de3-nmstate-lock\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353453 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qddzp\" (UniqueName: \"kubernetes.io/projected/d382f382-e329-4c64-9d5f-daa382470de3-kube-api-access-qddzp\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353482 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgfnz\" (UniqueName: \"kubernetes.io/projected/55028d16-a688-40c2-a1e7-eacb136d5ea1-kube-api-access-bgfnz\") pod \"nmstate-webhook-f8fb84555-x2hhj\" (UID: \"55028d16-a688-40c2-a1e7-eacb136d5ea1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353512 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5j78\" (UniqueName: \"kubernetes.io/projected/5f75e4a7-c411-444c-820f-168a7f5e51fb-kube-api-access-k5j78\") pod \"nmstate-console-plugin-6ff7998486-2hlv5\" (UID: \"5f75e4a7-c411-444c-820f-168a7f5e51fb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353536 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/d382f382-e329-4c64-9d5f-daa382470de3-dbus-socket\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353553 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5f75e4a7-c411-444c-820f-168a7f5e51fb-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-2hlv5\" (UID: \"5f75e4a7-c411-444c-820f-168a7f5e51fb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353577 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f75e4a7-c411-444c-820f-168a7f5e51fb-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-2hlv5\" (UID: \"5f75e4a7-c411-444c-820f-168a7f5e51fb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d382f382-e329-4c64-9d5f-daa382470de3-nmstate-lock\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353634 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d382f382-e329-4c64-9d5f-daa382470de3-ovs-socket\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353657 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/55028d16-a688-40c2-a1e7-eacb136d5ea1-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-x2hhj\" (UID: \"55028d16-a688-40c2-a1e7-eacb136d5ea1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353778 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d382f382-e329-4c64-9d5f-daa382470de3-ovs-socket\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.353946 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d382f382-e329-4c64-9d5f-daa382470de3-dbus-socket\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.360680 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/55028d16-a688-40c2-a1e7-eacb136d5ea1-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-x2hhj\" (UID: \"55028d16-a688-40c2-a1e7-eacb136d5ea1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.371195 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgfnz\" (UniqueName: \"kubernetes.io/projected/55028d16-a688-40c2-a1e7-eacb136d5ea1-kube-api-access-bgfnz\") pod \"nmstate-webhook-f8fb84555-x2hhj\" (UID: \"55028d16-a688-40c2-a1e7-eacb136d5ea1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.373063 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qddzp\" (UniqueName: 
\"kubernetes.io/projected/d382f382-e329-4c64-9d5f-daa382470de3-kube-api-access-qddzp\") pod \"nmstate-handler-kt6pp\" (UID: \"d382f382-e329-4c64-9d5f-daa382470de3\") " pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.381852 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm64s\" (UniqueName: \"kubernetes.io/projected/5c066e2d-baa4-4f40-a024-e8b4a5c67e1a-kube-api-access-jm64s\") pod \"nmstate-metrics-7f7f7578db-56bcb\" (UID: \"5c066e2d-baa4-4f40-a024-e8b4a5c67e1a\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.425906 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-667b55fd6f-6m7wt"] Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.426555 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.430730 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.437778 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.442193 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-667b55fd6f-6m7wt"] Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.450637 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.455850 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-oauth-serving-cert\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.455933 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0406448d-8912-4bd8-9366-aa485e3b5500-console-oauth-config\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.455985 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-service-ca\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.456007 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9vhk\" (UniqueName: \"kubernetes.io/projected/0406448d-8912-4bd8-9366-aa485e3b5500-kube-api-access-k9vhk\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.456069 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-trusted-ca-bundle\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.456095 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0406448d-8912-4bd8-9366-aa485e3b5500-console-serving-cert\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.456121 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-console-config\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.456179 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5j78\" (UniqueName: \"kubernetes.io/projected/5f75e4a7-c411-444c-820f-168a7f5e51fb-kube-api-access-k5j78\") pod \"nmstate-console-plugin-6ff7998486-2hlv5\" (UID: \"5f75e4a7-c411-444c-820f-168a7f5e51fb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.456237 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5f75e4a7-c411-444c-820f-168a7f5e51fb-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-2hlv5\" (UID: \"5f75e4a7-c411-444c-820f-168a7f5e51fb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.456267 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f75e4a7-c411-444c-820f-168a7f5e51fb-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-2hlv5\" (UID: \"5f75e4a7-c411-444c-820f-168a7f5e51fb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:55 crc kubenswrapper[4867]: E0101 08:39:55.456461 4867 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 01 08:39:55 crc kubenswrapper[4867]: E0101 08:39:55.456557 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f75e4a7-c411-444c-820f-168a7f5e51fb-plugin-serving-cert podName:5f75e4a7-c411-444c-820f-168a7f5e51fb nodeName:}" failed. No retries permitted until 2026-01-01 08:39:55.95651498 +0000 UTC m=+805.091783749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/5f75e4a7-c411-444c-820f-168a7f5e51fb-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-2hlv5" (UID: "5f75e4a7-c411-444c-820f-168a7f5e51fb") : secret "plugin-serving-cert" not found Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.457390 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5f75e4a7-c411-444c-820f-168a7f5e51fb-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-2hlv5\" (UID: \"5f75e4a7-c411-444c-820f-168a7f5e51fb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.491471 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5j78\" (UniqueName: \"kubernetes.io/projected/5f75e4a7-c411-444c-820f-168a7f5e51fb-kube-api-access-k5j78\") pod \"nmstate-console-plugin-6ff7998486-2hlv5\" (UID: \"5f75e4a7-c411-444c-820f-168a7f5e51fb\") " 
pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.560082 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-oauth-serving-cert\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.560280 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0406448d-8912-4bd8-9366-aa485e3b5500-console-oauth-config\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.560324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-service-ca\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.560345 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9vhk\" (UniqueName: \"kubernetes.io/projected/0406448d-8912-4bd8-9366-aa485e3b5500-kube-api-access-k9vhk\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.560379 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-trusted-ca-bundle\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " 
pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.560416 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0406448d-8912-4bd8-9366-aa485e3b5500-console-serving-cert\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.560437 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-console-config\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.561973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-console-config\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.562047 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-oauth-serving-cert\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.562054 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-trusted-ca-bundle\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 
08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.562096 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0406448d-8912-4bd8-9366-aa485e3b5500-service-ca\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.566081 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0406448d-8912-4bd8-9366-aa485e3b5500-console-oauth-config\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.566204 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0406448d-8912-4bd8-9366-aa485e3b5500-console-serving-cert\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.583744 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9vhk\" (UniqueName: \"kubernetes.io/projected/0406448d-8912-4bd8-9366-aa485e3b5500-kube-api-access-k9vhk\") pod \"console-667b55fd6f-6m7wt\" (UID: \"0406448d-8912-4bd8-9366-aa485e3b5500\") " pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.652066 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb"] Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.695596 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj"] Jan 01 08:39:55 crc kubenswrapper[4867]: W0101 08:39:55.703914 4867 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55028d16_a688_40c2_a1e7_eacb136d5ea1.slice/crio-f9ce97ac06260f3be41aa5d24a543d542f1ee02ca4553c5ae6454936adc885da WatchSource:0}: Error finding container f9ce97ac06260f3be41aa5d24a543d542f1ee02ca4553c5ae6454936adc885da: Status 404 returned error can't find the container with id f9ce97ac06260f3be41aa5d24a543d542f1ee02ca4553c5ae6454936adc885da Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.795673 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.965723 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f75e4a7-c411-444c-820f-168a7f5e51fb-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-2hlv5\" (UID: \"5f75e4a7-c411-444c-820f-168a7f5e51fb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.971167 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f75e4a7-c411-444c-820f-168a7f5e51fb-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-2hlv5\" (UID: \"5f75e4a7-c411-444c-820f-168a7f5e51fb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.979705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb" event={"ID":"5c066e2d-baa4-4f40-a024-e8b4a5c67e1a","Type":"ContainerStarted","Data":"36f38eaa1f60ba957cd3f31d7fd89e4f45908f85beae79930970081762019b98"} Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.980434 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kt6pp" 
event={"ID":"d382f382-e329-4c64-9d5f-daa382470de3","Type":"ContainerStarted","Data":"9633f1ac357d49521f05003b3aea273daeb09c8c08f170d3fd79e059d79826b6"} Jan 01 08:39:55 crc kubenswrapper[4867]: I0101 08:39:55.981212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" event={"ID":"55028d16-a688-40c2-a1e7-eacb136d5ea1","Type":"ContainerStarted","Data":"f9ce97ac06260f3be41aa5d24a543d542f1ee02ca4553c5ae6454936adc885da"} Jan 01 08:39:56 crc kubenswrapper[4867]: I0101 08:39:56.018077 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-667b55fd6f-6m7wt"] Jan 01 08:39:56 crc kubenswrapper[4867]: I0101 08:39:56.137965 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" Jan 01 08:39:56 crc kubenswrapper[4867]: I0101 08:39:56.374613 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5"] Jan 01 08:39:56 crc kubenswrapper[4867]: W0101 08:39:56.379718 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f75e4a7_c411_444c_820f_168a7f5e51fb.slice/crio-afb24d5adaa703039ca6e5a5fb48711e6a42d86e706a5983b377738ffa6dacc2 WatchSource:0}: Error finding container afb24d5adaa703039ca6e5a5fb48711e6a42d86e706a5983b377738ffa6dacc2: Status 404 returned error can't find the container with id afb24d5adaa703039ca6e5a5fb48711e6a42d86e706a5983b377738ffa6dacc2 Jan 01 08:39:57 crc kubenswrapper[4867]: I0101 08:39:57.003179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" event={"ID":"5f75e4a7-c411-444c-820f-168a7f5e51fb","Type":"ContainerStarted","Data":"afb24d5adaa703039ca6e5a5fb48711e6a42d86e706a5983b377738ffa6dacc2"} Jan 01 08:39:57 crc kubenswrapper[4867]: I0101 08:39:57.005737 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console/console-667b55fd6f-6m7wt" event={"ID":"0406448d-8912-4bd8-9366-aa485e3b5500","Type":"ContainerStarted","Data":"b624746a63cd1a06ac83bfa7286060baa4ebef5eba951ec12258994c0539c1a3"} Jan 01 08:39:57 crc kubenswrapper[4867]: I0101 08:39:57.005761 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-667b55fd6f-6m7wt" event={"ID":"0406448d-8912-4bd8-9366-aa485e3b5500","Type":"ContainerStarted","Data":"a9341db314861a69d502c76e44f4c51be36f3bb681979142e1b65f8146e0e1f6"} Jan 01 08:39:57 crc kubenswrapper[4867]: I0101 08:39:57.026439 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-667b55fd6f-6m7wt" podStartSLOduration=2.02641437 podStartE2EDuration="2.02641437s" podCreationTimestamp="2026-01-01 08:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:39:57.020212309 +0000 UTC m=+806.155481098" watchObservedRunningTime="2026-01-01 08:39:57.02641437 +0000 UTC m=+806.161683139" Jan 01 08:39:58 crc kubenswrapper[4867]: I0101 08:39:58.018196 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" event={"ID":"55028d16-a688-40c2-a1e7-eacb136d5ea1","Type":"ContainerStarted","Data":"50825bea1f806493528249124781e7b6a7a52a39d934f8136c4b1f2d2b260075"} Jan 01 08:39:58 crc kubenswrapper[4867]: I0101 08:39:58.018591 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" Jan 01 08:39:58 crc kubenswrapper[4867]: I0101 08:39:58.020440 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb" event={"ID":"5c066e2d-baa4-4f40-a024-e8b4a5c67e1a","Type":"ContainerStarted","Data":"694ff954fdd8269408a8a63a801cc1f7d04dc11dfb1fd7ba0988efbbd02b4283"} Jan 01 08:39:58 crc kubenswrapper[4867]: I0101 
08:39:58.037932 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" podStartSLOduration=1.050021335 podStartE2EDuration="3.037871039s" podCreationTimestamp="2026-01-01 08:39:55 +0000 UTC" firstStartedPulling="2026-01-01 08:39:55.708820517 +0000 UTC m=+804.844089286" lastFinishedPulling="2026-01-01 08:39:57.696670221 +0000 UTC m=+806.831938990" observedRunningTime="2026-01-01 08:39:58.032012328 +0000 UTC m=+807.167281117" watchObservedRunningTime="2026-01-01 08:39:58.037871039 +0000 UTC m=+807.173139818" Jan 01 08:39:58 crc kubenswrapper[4867]: I0101 08:39:58.055732 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-kt6pp" podStartSLOduration=0.813330385 podStartE2EDuration="3.055713008s" podCreationTimestamp="2026-01-01 08:39:55 +0000 UTC" firstStartedPulling="2026-01-01 08:39:55.489387592 +0000 UTC m=+804.624656361" lastFinishedPulling="2026-01-01 08:39:57.731770215 +0000 UTC m=+806.867038984" observedRunningTime="2026-01-01 08:39:58.052090339 +0000 UTC m=+807.187359138" watchObservedRunningTime="2026-01-01 08:39:58.055713008 +0000 UTC m=+807.190981787" Jan 01 08:39:59 crc kubenswrapper[4867]: I0101 08:39:59.032173 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kt6pp" event={"ID":"d382f382-e329-4c64-9d5f-daa382470de3","Type":"ContainerStarted","Data":"5667874b5951bf6e4bcf8118300d17576ab646d671feb816a1e6be07a7a5afad"} Jan 01 08:39:59 crc kubenswrapper[4867]: I0101 08:39:59.032862 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:39:59 crc kubenswrapper[4867]: I0101 08:39:59.034337 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" 
event={"ID":"5f75e4a7-c411-444c-820f-168a7f5e51fb","Type":"ContainerStarted","Data":"f1ce19f0ca217046d78cfcc2c952304e7ecfc74a30a4827a991b061c43f03666"} Jan 01 08:39:59 crc kubenswrapper[4867]: I0101 08:39:59.062246 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-2hlv5" podStartSLOduration=1.8377384700000001 podStartE2EDuration="4.06218512s" podCreationTimestamp="2026-01-01 08:39:55 +0000 UTC" firstStartedPulling="2026-01-01 08:39:56.381784563 +0000 UTC m=+805.517053332" lastFinishedPulling="2026-01-01 08:39:58.606231213 +0000 UTC m=+807.741499982" observedRunningTime="2026-01-01 08:39:59.060267768 +0000 UTC m=+808.195536577" watchObservedRunningTime="2026-01-01 08:39:59.06218512 +0000 UTC m=+808.197453919" Jan 01 08:40:00 crc kubenswrapper[4867]: I0101 08:40:00.045914 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb" event={"ID":"5c066e2d-baa4-4f40-a024-e8b4a5c67e1a","Type":"ContainerStarted","Data":"475276f080f9e32a853697cf77b527bb3a371d8104c30c4e3037727603e1b673"} Jan 01 08:40:00 crc kubenswrapper[4867]: I0101 08:40:00.067580 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-56bcb" podStartSLOduration=0.99125808 podStartE2EDuration="5.067561672s" podCreationTimestamp="2026-01-01 08:39:55 +0000 UTC" firstStartedPulling="2026-01-01 08:39:55.66522877 +0000 UTC m=+804.800497539" lastFinishedPulling="2026-01-01 08:39:59.741532362 +0000 UTC m=+808.876801131" observedRunningTime="2026-01-01 08:40:00.065515135 +0000 UTC m=+809.200783964" watchObservedRunningTime="2026-01-01 08:40:00.067561672 +0000 UTC m=+809.202830451" Jan 01 08:40:05 crc kubenswrapper[4867]: I0101 08:40:05.482295 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-kt6pp" Jan 01 08:40:05 crc kubenswrapper[4867]: I0101 
08:40:05.796425 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:40:05 crc kubenswrapper[4867]: I0101 08:40:05.797515 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:40:05 crc kubenswrapper[4867]: I0101 08:40:05.805526 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:40:06 crc kubenswrapper[4867]: I0101 08:40:06.101416 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-667b55fd6f-6m7wt" Jan 01 08:40:06 crc kubenswrapper[4867]: I0101 08:40:06.202979 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6lsq2"] Jan 01 08:40:15 crc kubenswrapper[4867]: I0101 08:40:15.447422 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-x2hhj" Jan 01 08:40:31 crc kubenswrapper[4867]: I0101 08:40:31.264468 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6lsq2" podUID="25d57f2f-1353-417b-ba47-a0ceb1a4577e" containerName="console" containerID="cri-o://f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2" gracePeriod=15 Jan 01 08:40:31 crc kubenswrapper[4867]: I0101 08:40:31.870675 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w"] Jan 01 08:40:31 crc kubenswrapper[4867]: I0101 08:40:31.876003 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:31 crc kubenswrapper[4867]: I0101 08:40:31.884962 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 01 08:40:31 crc kubenswrapper[4867]: I0101 08:40:31.888988 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w"] Jan 01 08:40:31 crc kubenswrapper[4867]: I0101 08:40:31.968329 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgwq4\" (UniqueName: \"kubernetes.io/projected/a970fbd3-2646-4156-9fe9-a7c33b86b488-kube-api-access-rgwq4\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:31 crc kubenswrapper[4867]: I0101 08:40:31.968389 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:31 crc kubenswrapper[4867]: I0101 08:40:31.968457 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:32 crc kubenswrapper[4867]: 
I0101 08:40:32.069418 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.069469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgwq4\" (UniqueName: \"kubernetes.io/projected/a970fbd3-2646-4156-9fe9-a7c33b86b488-kube-api-access-rgwq4\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.069503 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.070065 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.070232 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.090341 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgwq4\" (UniqueName: \"kubernetes.io/projected/a970fbd3-2646-4156-9fe9-a7c33b86b488-kube-api-access-rgwq4\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.158208 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6lsq2_25d57f2f-1353-417b-ba47-a0ceb1a4577e/console/0.log" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.158280 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.200712 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.272243 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-config\") pod \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.272607 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-trusted-ca-bundle\") pod \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.272665 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-oauth-config\") pod \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.272729 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrpfd\" (UniqueName: \"kubernetes.io/projected/25d57f2f-1353-417b-ba47-a0ceb1a4577e-kube-api-access-vrpfd\") pod \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.272751 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-service-ca\") pod \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.272772 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-oauth-serving-cert\") pod \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.272809 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-serving-cert\") pod \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\" (UID: \"25d57f2f-1353-417b-ba47-a0ceb1a4577e\") " Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.273168 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-config" (OuterVolumeSpecName: "console-config") pod "25d57f2f-1353-417b-ba47-a0ceb1a4577e" (UID: "25d57f2f-1353-417b-ba47-a0ceb1a4577e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.273227 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "25d57f2f-1353-417b-ba47-a0ceb1a4577e" (UID: "25d57f2f-1353-417b-ba47-a0ceb1a4577e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.274918 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-service-ca" (OuterVolumeSpecName: "service-ca") pod "25d57f2f-1353-417b-ba47-a0ceb1a4577e" (UID: "25d57f2f-1353-417b-ba47-a0ceb1a4577e"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.275706 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "25d57f2f-1353-417b-ba47-a0ceb1a4577e" (UID: "25d57f2f-1353-417b-ba47-a0ceb1a4577e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.277299 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6lsq2_25d57f2f-1353-417b-ba47-a0ceb1a4577e/console/0.log" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.277351 4867 generic.go:334] "Generic (PLEG): container finished" podID="25d57f2f-1353-417b-ba47-a0ceb1a4577e" containerID="f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2" exitCode=2 Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.277383 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6lsq2" event={"ID":"25d57f2f-1353-417b-ba47-a0ceb1a4577e","Type":"ContainerDied","Data":"f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2"} Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.277412 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6lsq2" event={"ID":"25d57f2f-1353-417b-ba47-a0ceb1a4577e","Type":"ContainerDied","Data":"78c1070c6faeeee7298b51fd78b1b7dd587dfc15453ad2223bc05b674894e2a6"} Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.277535 4867 scope.go:117] "RemoveContainer" containerID="f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.277655 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6lsq2" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.279160 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "25d57f2f-1353-417b-ba47-a0ceb1a4577e" (UID: "25d57f2f-1353-417b-ba47-a0ceb1a4577e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.279320 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "25d57f2f-1353-417b-ba47-a0ceb1a4577e" (UID: "25d57f2f-1353-417b-ba47-a0ceb1a4577e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.280061 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d57f2f-1353-417b-ba47-a0ceb1a4577e-kube-api-access-vrpfd" (OuterVolumeSpecName: "kube-api-access-vrpfd") pod "25d57f2f-1353-417b-ba47-a0ceb1a4577e" (UID: "25d57f2f-1353-417b-ba47-a0ceb1a4577e"). InnerVolumeSpecName "kube-api-access-vrpfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.317050 4867 scope.go:117] "RemoveContainer" containerID="f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2" Jan 01 08:40:32 crc kubenswrapper[4867]: E0101 08:40:32.348852 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2\": container with ID starting with f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2 not found: ID does not exist" containerID="f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.348936 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2"} err="failed to get container status \"f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2\": rpc error: code = NotFound desc = could not find container \"f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2\": container with ID starting with f24abf94f4fac3cefc439d3a5fb75778aa741a1a6910f2a0336af767d46240e2 not found: ID does not exist" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.373941 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.373986 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.373996 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.374005 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25d57f2f-1353-417b-ba47-a0ceb1a4577e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.374016 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrpfd\" (UniqueName: \"kubernetes.io/projected/25d57f2f-1353-417b-ba47-a0ceb1a4577e-kube-api-access-vrpfd\") on node \"crc\" DevicePath \"\"" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.374027 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-service-ca\") on node \"crc\" DevicePath \"\"" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.374035 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25d57f2f-1353-417b-ba47-a0ceb1a4577e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.466760 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w"] Jan 01 08:40:32 crc kubenswrapper[4867]: W0101 08:40:32.470188 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda970fbd3_2646_4156_9fe9_a7c33b86b488.slice/crio-4b8c4191787aef651ea6b49c87afff8068d2e317c43ce2ac3a6be65079028884 WatchSource:0}: Error finding container 4b8c4191787aef651ea6b49c87afff8068d2e317c43ce2ac3a6be65079028884: Status 404 returned error can't find the container with id 4b8c4191787aef651ea6b49c87afff8068d2e317c43ce2ac3a6be65079028884 Jan 01 08:40:32 crc 
kubenswrapper[4867]: I0101 08:40:32.621793 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6lsq2"] Jan 01 08:40:32 crc kubenswrapper[4867]: I0101 08:40:32.630572 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6lsq2"] Jan 01 08:40:33 crc kubenswrapper[4867]: I0101 08:40:33.144940 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d57f2f-1353-417b-ba47-a0ceb1a4577e" path="/var/lib/kubelet/pods/25d57f2f-1353-417b-ba47-a0ceb1a4577e/volumes" Jan 01 08:40:33 crc kubenswrapper[4867]: I0101 08:40:33.291815 4867 generic.go:334] "Generic (PLEG): container finished" podID="a970fbd3-2646-4156-9fe9-a7c33b86b488" containerID="fd7c22db28bef94223a94083c6080a711435fce88bd36fe7743ba821e5a1692e" exitCode=0 Jan 01 08:40:33 crc kubenswrapper[4867]: I0101 08:40:33.292143 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" event={"ID":"a970fbd3-2646-4156-9fe9-a7c33b86b488","Type":"ContainerDied","Data":"fd7c22db28bef94223a94083c6080a711435fce88bd36fe7743ba821e5a1692e"} Jan 01 08:40:33 crc kubenswrapper[4867]: I0101 08:40:33.292428 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" event={"ID":"a970fbd3-2646-4156-9fe9-a7c33b86b488","Type":"ContainerStarted","Data":"4b8c4191787aef651ea6b49c87afff8068d2e317c43ce2ac3a6be65079028884"} Jan 01 08:40:35 crc kubenswrapper[4867]: I0101 08:40:35.314061 4867 generic.go:334] "Generic (PLEG): container finished" podID="a970fbd3-2646-4156-9fe9-a7c33b86b488" containerID="070cce07934beb4d2305bfb4df4443cb891406fe7471ecb5ea91d9b78624385b" exitCode=0 Jan 01 08:40:35 crc kubenswrapper[4867]: I0101 08:40:35.314171 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" event={"ID":"a970fbd3-2646-4156-9fe9-a7c33b86b488","Type":"ContainerDied","Data":"070cce07934beb4d2305bfb4df4443cb891406fe7471ecb5ea91d9b78624385b"} Jan 01 08:40:36 crc kubenswrapper[4867]: I0101 08:40:36.325649 4867 generic.go:334] "Generic (PLEG): container finished" podID="a970fbd3-2646-4156-9fe9-a7c33b86b488" containerID="b3c8156e18db72094134dbf881f5a0a75c5965133ffe1fdb021e502eece7c23e" exitCode=0 Jan 01 08:40:36 crc kubenswrapper[4867]: I0101 08:40:36.325711 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" event={"ID":"a970fbd3-2646-4156-9fe9-a7c33b86b488","Type":"ContainerDied","Data":"b3c8156e18db72094134dbf881f5a0a75c5965133ffe1fdb021e502eece7c23e"} Jan 01 08:40:37 crc kubenswrapper[4867]: I0101 08:40:37.697309 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:37 crc kubenswrapper[4867]: I0101 08:40:37.867507 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-util\") pod \"a970fbd3-2646-4156-9fe9-a7c33b86b488\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " Jan 01 08:40:37 crc kubenswrapper[4867]: I0101 08:40:37.867612 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-bundle\") pod \"a970fbd3-2646-4156-9fe9-a7c33b86b488\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " Jan 01 08:40:37 crc kubenswrapper[4867]: I0101 08:40:37.867649 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgwq4\" (UniqueName: 
\"kubernetes.io/projected/a970fbd3-2646-4156-9fe9-a7c33b86b488-kube-api-access-rgwq4\") pod \"a970fbd3-2646-4156-9fe9-a7c33b86b488\" (UID: \"a970fbd3-2646-4156-9fe9-a7c33b86b488\") " Jan 01 08:40:37 crc kubenswrapper[4867]: I0101 08:40:37.869357 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-bundle" (OuterVolumeSpecName: "bundle") pod "a970fbd3-2646-4156-9fe9-a7c33b86b488" (UID: "a970fbd3-2646-4156-9fe9-a7c33b86b488"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:40:37 crc kubenswrapper[4867]: I0101 08:40:37.877532 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a970fbd3-2646-4156-9fe9-a7c33b86b488-kube-api-access-rgwq4" (OuterVolumeSpecName: "kube-api-access-rgwq4") pod "a970fbd3-2646-4156-9fe9-a7c33b86b488" (UID: "a970fbd3-2646-4156-9fe9-a7c33b86b488"). InnerVolumeSpecName "kube-api-access-rgwq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:40:37 crc kubenswrapper[4867]: I0101 08:40:37.969265 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:40:37 crc kubenswrapper[4867]: I0101 08:40:37.969338 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgwq4\" (UniqueName: \"kubernetes.io/projected/a970fbd3-2646-4156-9fe9-a7c33b86b488-kube-api-access-rgwq4\") on node \"crc\" DevicePath \"\"" Jan 01 08:40:38 crc kubenswrapper[4867]: I0101 08:40:38.059486 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-util" (OuterVolumeSpecName: "util") pod "a970fbd3-2646-4156-9fe9-a7c33b86b488" (UID: "a970fbd3-2646-4156-9fe9-a7c33b86b488"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:40:38 crc kubenswrapper[4867]: I0101 08:40:38.070466 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a970fbd3-2646-4156-9fe9-a7c33b86b488-util\") on node \"crc\" DevicePath \"\"" Jan 01 08:40:38 crc kubenswrapper[4867]: I0101 08:40:38.346848 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" event={"ID":"a970fbd3-2646-4156-9fe9-a7c33b86b488","Type":"ContainerDied","Data":"4b8c4191787aef651ea6b49c87afff8068d2e317c43ce2ac3a6be65079028884"} Jan 01 08:40:38 crc kubenswrapper[4867]: I0101 08:40:38.347428 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b8c4191787aef651ea6b49c87afff8068d2e317c43ce2ac3a6be65079028884" Jan 01 08:40:38 crc kubenswrapper[4867]: I0101 08:40:38.347002 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.841935 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl"] Jan 01 08:40:46 crc kubenswrapper[4867]: E0101 08:40:46.842655 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a970fbd3-2646-4156-9fe9-a7c33b86b488" containerName="extract" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.842670 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a970fbd3-2646-4156-9fe9-a7c33b86b488" containerName="extract" Jan 01 08:40:46 crc kubenswrapper[4867]: E0101 08:40:46.842683 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a970fbd3-2646-4156-9fe9-a7c33b86b488" containerName="util" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.842689 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a970fbd3-2646-4156-9fe9-a7c33b86b488" containerName="util" Jan 01 08:40:46 crc kubenswrapper[4867]: E0101 08:40:46.842702 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d57f2f-1353-417b-ba47-a0ceb1a4577e" containerName="console" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.842711 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d57f2f-1353-417b-ba47-a0ceb1a4577e" containerName="console" Jan 01 08:40:46 crc kubenswrapper[4867]: E0101 08:40:46.842722 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a970fbd3-2646-4156-9fe9-a7c33b86b488" containerName="pull" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.842729 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a970fbd3-2646-4156-9fe9-a7c33b86b488" containerName="pull" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.842866 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d57f2f-1353-417b-ba47-a0ceb1a4577e" containerName="console" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.842895 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a970fbd3-2646-4156-9fe9-a7c33b86b488" containerName="extract" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.843255 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.849294 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.851055 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.851174 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.851360 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-jpg8f" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.851718 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.855409 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl"] Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.892911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99dq2\" (UniqueName: \"kubernetes.io/projected/1a30748a-7ac7-4db0-89b3-17a43c7e3fde-kube-api-access-99dq2\") pod \"metallb-operator-controller-manager-74c858c997-zhnxl\" (UID: \"1a30748a-7ac7-4db0-89b3-17a43c7e3fde\") " pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.893161 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a30748a-7ac7-4db0-89b3-17a43c7e3fde-apiservice-cert\") pod 
\"metallb-operator-controller-manager-74c858c997-zhnxl\" (UID: \"1a30748a-7ac7-4db0-89b3-17a43c7e3fde\") " pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.893193 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a30748a-7ac7-4db0-89b3-17a43c7e3fde-webhook-cert\") pod \"metallb-operator-controller-manager-74c858c997-zhnxl\" (UID: \"1a30748a-7ac7-4db0-89b3-17a43c7e3fde\") " pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.994609 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99dq2\" (UniqueName: \"kubernetes.io/projected/1a30748a-7ac7-4db0-89b3-17a43c7e3fde-kube-api-access-99dq2\") pod \"metallb-operator-controller-manager-74c858c997-zhnxl\" (UID: \"1a30748a-7ac7-4db0-89b3-17a43c7e3fde\") " pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.994679 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a30748a-7ac7-4db0-89b3-17a43c7e3fde-apiservice-cert\") pod \"metallb-operator-controller-manager-74c858c997-zhnxl\" (UID: \"1a30748a-7ac7-4db0-89b3-17a43c7e3fde\") " pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:46 crc kubenswrapper[4867]: I0101 08:40:46.994712 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a30748a-7ac7-4db0-89b3-17a43c7e3fde-webhook-cert\") pod \"metallb-operator-controller-manager-74c858c997-zhnxl\" (UID: \"1a30748a-7ac7-4db0-89b3-17a43c7e3fde\") " pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:47 crc 
kubenswrapper[4867]: I0101 08:40:47.002976 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a30748a-7ac7-4db0-89b3-17a43c7e3fde-webhook-cert\") pod \"metallb-operator-controller-manager-74c858c997-zhnxl\" (UID: \"1a30748a-7ac7-4db0-89b3-17a43c7e3fde\") " pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.006599 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a30748a-7ac7-4db0-89b3-17a43c7e3fde-apiservice-cert\") pod \"metallb-operator-controller-manager-74c858c997-zhnxl\" (UID: \"1a30748a-7ac7-4db0-89b3-17a43c7e3fde\") " pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.009391 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99dq2\" (UniqueName: \"kubernetes.io/projected/1a30748a-7ac7-4db0-89b3-17a43c7e3fde-kube-api-access-99dq2\") pod \"metallb-operator-controller-manager-74c858c997-zhnxl\" (UID: \"1a30748a-7ac7-4db0-89b3-17a43c7e3fde\") " pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.194508 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.359503 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-66c4777458-fml45"] Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.361537 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.365928 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-thr49" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.366124 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.370289 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.378650 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-66c4777458-fml45"] Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.399447 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24863f96-c5b8-4c66-bcf6-5e796cf8068a-apiservice-cert\") pod \"metallb-operator-webhook-server-66c4777458-fml45\" (UID: \"24863f96-c5b8-4c66-bcf6-5e796cf8068a\") " pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.399493 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24863f96-c5b8-4c66-bcf6-5e796cf8068a-webhook-cert\") pod \"metallb-operator-webhook-server-66c4777458-fml45\" (UID: \"24863f96-c5b8-4c66-bcf6-5e796cf8068a\") " pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.399559 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf528\" (UniqueName: 
\"kubernetes.io/projected/24863f96-c5b8-4c66-bcf6-5e796cf8068a-kube-api-access-gf528\") pod \"metallb-operator-webhook-server-66c4777458-fml45\" (UID: \"24863f96-c5b8-4c66-bcf6-5e796cf8068a\") " pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.500340 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf528\" (UniqueName: \"kubernetes.io/projected/24863f96-c5b8-4c66-bcf6-5e796cf8068a-kube-api-access-gf528\") pod \"metallb-operator-webhook-server-66c4777458-fml45\" (UID: \"24863f96-c5b8-4c66-bcf6-5e796cf8068a\") " pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.500402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24863f96-c5b8-4c66-bcf6-5e796cf8068a-apiservice-cert\") pod \"metallb-operator-webhook-server-66c4777458-fml45\" (UID: \"24863f96-c5b8-4c66-bcf6-5e796cf8068a\") " pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.500437 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24863f96-c5b8-4c66-bcf6-5e796cf8068a-webhook-cert\") pod \"metallb-operator-webhook-server-66c4777458-fml45\" (UID: \"24863f96-c5b8-4c66-bcf6-5e796cf8068a\") " pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.505867 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24863f96-c5b8-4c66-bcf6-5e796cf8068a-apiservice-cert\") pod \"metallb-operator-webhook-server-66c4777458-fml45\" (UID: \"24863f96-c5b8-4c66-bcf6-5e796cf8068a\") " pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" 
Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.505957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24863f96-c5b8-4c66-bcf6-5e796cf8068a-webhook-cert\") pod \"metallb-operator-webhook-server-66c4777458-fml45\" (UID: \"24863f96-c5b8-4c66-bcf6-5e796cf8068a\") " pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.521489 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf528\" (UniqueName: \"kubernetes.io/projected/24863f96-c5b8-4c66-bcf6-5e796cf8068a-kube-api-access-gf528\") pod \"metallb-operator-webhook-server-66c4777458-fml45\" (UID: \"24863f96-c5b8-4c66-bcf6-5e796cf8068a\") " pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.647234 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl"] Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.678975 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:40:47 crc kubenswrapper[4867]: I0101 08:40:47.923653 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-66c4777458-fml45"] Jan 01 08:40:47 crc kubenswrapper[4867]: W0101 08:40:47.930094 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24863f96_c5b8_4c66_bcf6_5e796cf8068a.slice/crio-071762c708365b0bb4a342dca6c169d9f5e2912981d8e4ff3ba6e9629bfb579e WatchSource:0}: Error finding container 071762c708365b0bb4a342dca6c169d9f5e2912981d8e4ff3ba6e9629bfb579e: Status 404 returned error can't find the container with id 071762c708365b0bb4a342dca6c169d9f5e2912981d8e4ff3ba6e9629bfb579e Jan 01 08:40:48 crc kubenswrapper[4867]: I0101 08:40:48.410031 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" event={"ID":"1a30748a-7ac7-4db0-89b3-17a43c7e3fde","Type":"ContainerStarted","Data":"687106a1d1c2ed06a347687236bc4da616c6be8c7ae458ab9922ff67712b8600"} Jan 01 08:40:48 crc kubenswrapper[4867]: I0101 08:40:48.411346 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" event={"ID":"24863f96-c5b8-4c66-bcf6-5e796cf8068a","Type":"ContainerStarted","Data":"071762c708365b0bb4a342dca6c169d9f5e2912981d8e4ff3ba6e9629bfb579e"} Jan 01 08:40:51 crc kubenswrapper[4867]: I0101 08:40:51.434455 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" event={"ID":"1a30748a-7ac7-4db0-89b3-17a43c7e3fde","Type":"ContainerStarted","Data":"0e1e770ae520266b73aeb4f2e0a0c13923294081f929c4e720cee49e26b77c9a"} Jan 01 08:40:51 crc kubenswrapper[4867]: I0101 08:40:51.435021 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:40:51 crc kubenswrapper[4867]: I0101 08:40:51.452803 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" podStartSLOduration=2.430476065 podStartE2EDuration="5.452782843s" podCreationTimestamp="2026-01-01 08:40:46 +0000 UTC" firstStartedPulling="2026-01-01 08:40:47.653512704 +0000 UTC m=+856.788781463" lastFinishedPulling="2026-01-01 08:40:50.675819472 +0000 UTC m=+859.811088241" observedRunningTime="2026-01-01 08:40:51.451304522 +0000 UTC m=+860.586573311" watchObservedRunningTime="2026-01-01 08:40:51.452782843 +0000 UTC m=+860.588051612" Jan 01 08:40:53 crc kubenswrapper[4867]: I0101 08:40:53.449112 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" event={"ID":"24863f96-c5b8-4c66-bcf6-5e796cf8068a","Type":"ContainerStarted","Data":"4ff20a3b612c8e23420d12f8a4a19642212de12673f5fd8fa96aa281beede9ad"} Jan 01 08:40:53 crc kubenswrapper[4867]: I0101 08:40:53.451390 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:40:53 crc kubenswrapper[4867]: I0101 08:40:53.467828 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" podStartSLOduration=1.967357789 podStartE2EDuration="6.467811163s" podCreationTimestamp="2026-01-01 08:40:47 +0000 UTC" firstStartedPulling="2026-01-01 08:40:47.931468065 +0000 UTC m=+857.066736834" lastFinishedPulling="2026-01-01 08:40:52.431921439 +0000 UTC m=+861.567190208" observedRunningTime="2026-01-01 08:40:53.46447985 +0000 UTC m=+862.599748649" watchObservedRunningTime="2026-01-01 08:40:53.467811163 +0000 UTC m=+862.603079932" Jan 01 08:41:07 crc kubenswrapper[4867]: I0101 08:41:07.683295 4867 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-66c4777458-fml45" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.021221 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x69jr"] Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.022634 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.069798 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x69jr"] Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.186010 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-utilities\") pod \"redhat-marketplace-x69jr\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.186194 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r8wr\" (UniqueName: \"kubernetes.io/projected/88bd2548-0a92-47ed-b282-eb47fc48fc9c-kube-api-access-5r8wr\") pod \"redhat-marketplace-x69jr\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.186368 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-catalog-content\") pod \"redhat-marketplace-x69jr\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.289283 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-catalog-content\") pod \"redhat-marketplace-x69jr\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.289375 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-utilities\") pod \"redhat-marketplace-x69jr\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.289450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r8wr\" (UniqueName: \"kubernetes.io/projected/88bd2548-0a92-47ed-b282-eb47fc48fc9c-kube-api-access-5r8wr\") pod \"redhat-marketplace-x69jr\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.289877 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-utilities\") pod \"redhat-marketplace-x69jr\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.289914 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-catalog-content\") pod \"redhat-marketplace-x69jr\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.321199 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5r8wr\" (UniqueName: \"kubernetes.io/projected/88bd2548-0a92-47ed-b282-eb47fc48fc9c-kube-api-access-5r8wr\") pod \"redhat-marketplace-x69jr\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.337588 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:08 crc kubenswrapper[4867]: I0101 08:41:08.773084 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x69jr"] Jan 01 08:41:09 crc kubenswrapper[4867]: I0101 08:41:09.664455 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x69jr" event={"ID":"88bd2548-0a92-47ed-b282-eb47fc48fc9c","Type":"ContainerStarted","Data":"1e8e84344a3e2977f7c9a84837d9ccc682e2c901b016a59916e3eada69db80bb"} Jan 01 08:41:10 crc kubenswrapper[4867]: I0101 08:41:10.673487 4867 generic.go:334] "Generic (PLEG): container finished" podID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" containerID="07a25c468b279dd7219a4510c225fde8b032c1d938824474ac6b422d058bded3" exitCode=0 Jan 01 08:41:10 crc kubenswrapper[4867]: I0101 08:41:10.673562 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x69jr" event={"ID":"88bd2548-0a92-47ed-b282-eb47fc48fc9c","Type":"ContainerDied","Data":"07a25c468b279dd7219a4510c225fde8b032c1d938824474ac6b422d058bded3"} Jan 01 08:41:11 crc kubenswrapper[4867]: I0101 08:41:11.681322 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x69jr" event={"ID":"88bd2548-0a92-47ed-b282-eb47fc48fc9c","Type":"ContainerStarted","Data":"f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc"} Jan 01 08:41:12 crc kubenswrapper[4867]: I0101 08:41:12.691531 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" containerID="f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc" exitCode=0 Jan 01 08:41:12 crc kubenswrapper[4867]: I0101 08:41:12.691599 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x69jr" event={"ID":"88bd2548-0a92-47ed-b282-eb47fc48fc9c","Type":"ContainerDied","Data":"f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc"} Jan 01 08:41:13 crc kubenswrapper[4867]: I0101 08:41:13.699601 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x69jr" event={"ID":"88bd2548-0a92-47ed-b282-eb47fc48fc9c","Type":"ContainerStarted","Data":"65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71"} Jan 01 08:41:13 crc kubenswrapper[4867]: I0101 08:41:13.717270 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x69jr" podStartSLOduration=3.008026007 podStartE2EDuration="5.717253929s" podCreationTimestamp="2026-01-01 08:41:08 +0000 UTC" firstStartedPulling="2026-01-01 08:41:10.676745896 +0000 UTC m=+879.812014695" lastFinishedPulling="2026-01-01 08:41:13.385973838 +0000 UTC m=+882.521242617" observedRunningTime="2026-01-01 08:41:13.71583088 +0000 UTC m=+882.851099699" watchObservedRunningTime="2026-01-01 08:41:13.717253929 +0000 UTC m=+882.852522688" Jan 01 08:41:18 crc kubenswrapper[4867]: I0101 08:41:18.338326 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:18 crc kubenswrapper[4867]: I0101 08:41:18.338742 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:18 crc kubenswrapper[4867]: I0101 08:41:18.408175 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:18 crc 
kubenswrapper[4867]: I0101 08:41:18.795296 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:20 crc kubenswrapper[4867]: I0101 08:41:20.816538 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x69jr"] Jan 01 08:41:20 crc kubenswrapper[4867]: I0101 08:41:20.817821 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x69jr" podUID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" containerName="registry-server" containerID="cri-o://65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71" gracePeriod=2 Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.182416 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.286514 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-catalog-content\") pod \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.286616 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r8wr\" (UniqueName: \"kubernetes.io/projected/88bd2548-0a92-47ed-b282-eb47fc48fc9c-kube-api-access-5r8wr\") pod \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.286795 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-utilities\") pod \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\" (UID: \"88bd2548-0a92-47ed-b282-eb47fc48fc9c\") " Jan 01 08:41:21 crc 
kubenswrapper[4867]: I0101 08:41:21.287477 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-utilities" (OuterVolumeSpecName: "utilities") pod "88bd2548-0a92-47ed-b282-eb47fc48fc9c" (UID: "88bd2548-0a92-47ed-b282-eb47fc48fc9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.288012 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.294047 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88bd2548-0a92-47ed-b282-eb47fc48fc9c-kube-api-access-5r8wr" (OuterVolumeSpecName: "kube-api-access-5r8wr") pod "88bd2548-0a92-47ed-b282-eb47fc48fc9c" (UID: "88bd2548-0a92-47ed-b282-eb47fc48fc9c"). InnerVolumeSpecName "kube-api-access-5r8wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.315263 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88bd2548-0a92-47ed-b282-eb47fc48fc9c" (UID: "88bd2548-0a92-47ed-b282-eb47fc48fc9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.388403 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bd2548-0a92-47ed-b282-eb47fc48fc9c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.388638 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r8wr\" (UniqueName: \"kubernetes.io/projected/88bd2548-0a92-47ed-b282-eb47fc48fc9c-kube-api-access-5r8wr\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.760322 4867 generic.go:334] "Generic (PLEG): container finished" podID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" containerID="65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71" exitCode=0 Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.760376 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x69jr" event={"ID":"88bd2548-0a92-47ed-b282-eb47fc48fc9c","Type":"ContainerDied","Data":"65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71"} Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.760398 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x69jr" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.760423 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x69jr" event={"ID":"88bd2548-0a92-47ed-b282-eb47fc48fc9c","Type":"ContainerDied","Data":"1e8e84344a3e2977f7c9a84837d9ccc682e2c901b016a59916e3eada69db80bb"} Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.760452 4867 scope.go:117] "RemoveContainer" containerID="65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.783048 4867 scope.go:117] "RemoveContainer" containerID="f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.800102 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x69jr"] Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.814584 4867 scope.go:117] "RemoveContainer" containerID="07a25c468b279dd7219a4510c225fde8b032c1d938824474ac6b422d058bded3" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.815445 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x69jr"] Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.829062 4867 scope.go:117] "RemoveContainer" containerID="65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71" Jan 01 08:41:21 crc kubenswrapper[4867]: E0101 08:41:21.829345 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71\": container with ID starting with 65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71 not found: ID does not exist" containerID="65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.829377 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71"} err="failed to get container status \"65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71\": rpc error: code = NotFound desc = could not find container \"65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71\": container with ID starting with 65c090cce5542835ce02c1ed9d19a6ebc4e53faa33cb3d40fd80b1ce093b5e71 not found: ID does not exist" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.829401 4867 scope.go:117] "RemoveContainer" containerID="f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc" Jan 01 08:41:21 crc kubenswrapper[4867]: E0101 08:41:21.829626 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc\": container with ID starting with f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc not found: ID does not exist" containerID="f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.829652 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc"} err="failed to get container status \"f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc\": rpc error: code = NotFound desc = could not find container \"f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc\": container with ID starting with f0287d0cde5377c5ce1f3d9307e1d9418530586e1227a9acdf1f14e539141ecc not found: ID does not exist" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.829670 4867 scope.go:117] "RemoveContainer" containerID="07a25c468b279dd7219a4510c225fde8b032c1d938824474ac6b422d058bded3" Jan 01 08:41:21 crc kubenswrapper[4867]: E0101 
08:41:21.830105 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07a25c468b279dd7219a4510c225fde8b032c1d938824474ac6b422d058bded3\": container with ID starting with 07a25c468b279dd7219a4510c225fde8b032c1d938824474ac6b422d058bded3 not found: ID does not exist" containerID="07a25c468b279dd7219a4510c225fde8b032c1d938824474ac6b422d058bded3" Jan 01 08:41:21 crc kubenswrapper[4867]: I0101 08:41:21.830128 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a25c468b279dd7219a4510c225fde8b032c1d938824474ac6b422d058bded3"} err="failed to get container status \"07a25c468b279dd7219a4510c225fde8b032c1d938824474ac6b422d058bded3\": rpc error: code = NotFound desc = could not find container \"07a25c468b279dd7219a4510c225fde8b032c1d938824474ac6b422d058bded3\": container with ID starting with 07a25c468b279dd7219a4510c225fde8b032c1d938824474ac6b422d058bded3 not found: ID does not exist" Jan 01 08:41:23 crc kubenswrapper[4867]: I0101 08:41:23.141250 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" path="/var/lib/kubelet/pods/88bd2548-0a92-47ed-b282-eb47fc48fc9c/volumes" Jan 01 08:41:27 crc kubenswrapper[4867]: I0101 08:41:27.196820 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-74c858c997-zhnxl" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.019922 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hzkdp"] Jan 01 08:41:28 crc kubenswrapper[4867]: E0101 08:41:28.020170 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" containerName="registry-server" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.020188 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" containerName="registry-server" Jan 01 08:41:28 crc kubenswrapper[4867]: E0101 08:41:28.020204 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" containerName="extract-content" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.020213 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" containerName="extract-content" Jan 01 08:41:28 crc kubenswrapper[4867]: E0101 08:41:28.020240 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" containerName="extract-utilities" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.020250 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" containerName="extract-utilities" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.020369 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="88bd2548-0a92-47ed-b282-eb47fc48fc9c" containerName="registry-server" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.021424 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.046519 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzkdp"] Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.084988 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkrt9\" (UniqueName: \"kubernetes.io/projected/d4ffc669-dae0-4e4a-bc04-ce3957305b71-kube-api-access-qkrt9\") pod \"certified-operators-hzkdp\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.085057 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-utilities\") pod \"certified-operators-hzkdp\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.085217 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-catalog-content\") pod \"certified-operators-hzkdp\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.089570 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w"] Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.090671 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.093745 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.093967 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qndff" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.101865 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-blnbq"] Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.104689 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.108312 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w"] Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.112471 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.112709 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.171473 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cc2m9"] Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.172296 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.174070 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.174302 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2v6vh" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.174462 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.174564 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.184361 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-6k4kx"] Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.185378 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186182 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2996cfdf-82a5-4df9-b1e8-5553e35489b4-metallb-excludel2\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186402 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-memberlist\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186427 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vqx\" (UniqueName: \"kubernetes.io/projected/46986363-a0a5-4868-83b8-b1536fb75705-kube-api-access-n4vqx\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkrt9\" (UniqueName: \"kubernetes.io/projected/d4ffc669-dae0-4e4a-bc04-ce3957305b71-kube-api-access-qkrt9\") pod \"certified-operators-hzkdp\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186491 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-frr-conf\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " 
pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186506 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fe21b9b-fc35-41dd-aa42-deb7bef61c21-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-g8s2w\" (UID: \"1fe21b9b-fc35-41dd-aa42-deb7bef61c21\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186523 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-utilities\") pod \"certified-operators-hzkdp\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186539 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59j5q\" (UniqueName: \"kubernetes.io/projected/1fe21b9b-fc35-41dd-aa42-deb7bef61c21-kube-api-access-59j5q\") pod \"frr-k8s-webhook-server-7784b6fcf-g8s2w\" (UID: \"1fe21b9b-fc35-41dd-aa42-deb7bef61c21\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186558 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-catalog-content\") pod \"certified-operators-hzkdp\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186574 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/46986363-a0a5-4868-83b8-b1536fb75705-frr-startup\") pod \"frr-k8s-blnbq\" (UID: 
\"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186592 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-frr-sockets\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186614 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77tdt\" (UniqueName: \"kubernetes.io/projected/2996cfdf-82a5-4df9-b1e8-5553e35489b4-kube-api-access-77tdt\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186653 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-metrics\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186686 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-metrics-certs\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.186706 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46986363-a0a5-4868-83b8-b1536fb75705-metrics-certs\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc 
kubenswrapper[4867]: I0101 08:41:28.186720 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-reloader\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.187523 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-utilities\") pod \"certified-operators-hzkdp\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.187738 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-catalog-content\") pod \"certified-operators-hzkdp\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.196436 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.213760 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkrt9\" (UniqueName: \"kubernetes.io/projected/d4ffc669-dae0-4e4a-bc04-ce3957305b71-kube-api-access-qkrt9\") pod \"certified-operators-hzkdp\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.230024 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-6k4kx"] Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287256 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-metrics-certs\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287297 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46986363-a0a5-4868-83b8-b1536fb75705-metrics-certs\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-reloader\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287341 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq56q\" (UniqueName: \"kubernetes.io/projected/edc90365-7a93-4042-ba66-e7ee4e6ba188-kube-api-access-nq56q\") pod \"controller-5bddd4b946-6k4kx\" (UID: \"edc90365-7a93-4042-ba66-e7ee4e6ba188\") " pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287358 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2996cfdf-82a5-4df9-b1e8-5553e35489b4-metallb-excludel2\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edc90365-7a93-4042-ba66-e7ee4e6ba188-metrics-certs\") 
pod \"controller-5bddd4b946-6k4kx\" (UID: \"edc90365-7a93-4042-ba66-e7ee4e6ba188\") " pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287393 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-memberlist\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4vqx\" (UniqueName: \"kubernetes.io/projected/46986363-a0a5-4868-83b8-b1536fb75705-kube-api-access-n4vqx\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287447 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edc90365-7a93-4042-ba66-e7ee4e6ba188-cert\") pod \"controller-5bddd4b946-6k4kx\" (UID: \"edc90365-7a93-4042-ba66-e7ee4e6ba188\") " pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287466 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fe21b9b-fc35-41dd-aa42-deb7bef61c21-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-g8s2w\" (UID: \"1fe21b9b-fc35-41dd-aa42-deb7bef61c21\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287483 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-frr-conf\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" 
Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287500 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59j5q\" (UniqueName: \"kubernetes.io/projected/1fe21b9b-fc35-41dd-aa42-deb7bef61c21-kube-api-access-59j5q\") pod \"frr-k8s-webhook-server-7784b6fcf-g8s2w\" (UID: \"1fe21b9b-fc35-41dd-aa42-deb7bef61c21\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287517 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/46986363-a0a5-4868-83b8-b1536fb75705-frr-startup\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287536 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-frr-sockets\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287558 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77tdt\" (UniqueName: \"kubernetes.io/projected/2996cfdf-82a5-4df9-b1e8-5553e35489b4-kube-api-access-77tdt\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-metrics\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.287950 4867 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-metrics\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: E0101 08:41:28.288042 4867 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 01 08:41:28 crc kubenswrapper[4867]: E0101 08:41:28.288092 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-metrics-certs podName:2996cfdf-82a5-4df9-b1e8-5553e35489b4 nodeName:}" failed. No retries permitted until 2026-01-01 08:41:28.788073417 +0000 UTC m=+897.923342186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-metrics-certs") pod "speaker-cc2m9" (UID: "2996cfdf-82a5-4df9-b1e8-5553e35489b4") : secret "speaker-certs-secret" not found Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.288978 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-reloader\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.289588 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2996cfdf-82a5-4df9-b1e8-5553e35489b4-metallb-excludel2\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: E0101 08:41:28.289669 4867 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 01 08:41:28 crc kubenswrapper[4867]: E0101 08:41:28.289726 4867 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-memberlist podName:2996cfdf-82a5-4df9-b1e8-5553e35489b4 nodeName:}" failed. No retries permitted until 2026-01-01 08:41:28.789689562 +0000 UTC m=+897.924958331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-memberlist") pod "speaker-cc2m9" (UID: "2996cfdf-82a5-4df9-b1e8-5553e35489b4") : secret "metallb-memberlist" not found Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.290959 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/46986363-a0a5-4868-83b8-b1536fb75705-frr-startup\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.292283 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-frr-conf\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.292837 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/46986363-a0a5-4868-83b8-b1536fb75705-frr-sockets\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.297267 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fe21b9b-fc35-41dd-aa42-deb7bef61c21-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-g8s2w\" (UID: \"1fe21b9b-fc35-41dd-aa42-deb7bef61c21\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" Jan 01 08:41:28 crc 
kubenswrapper[4867]: I0101 08:41:28.297475 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46986363-a0a5-4868-83b8-b1536fb75705-metrics-certs\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.311490 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4vqx\" (UniqueName: \"kubernetes.io/projected/46986363-a0a5-4868-83b8-b1536fb75705-kube-api-access-n4vqx\") pod \"frr-k8s-blnbq\" (UID: \"46986363-a0a5-4868-83b8-b1536fb75705\") " pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.320533 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77tdt\" (UniqueName: \"kubernetes.io/projected/2996cfdf-82a5-4df9-b1e8-5553e35489b4-kube-api-access-77tdt\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.322368 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59j5q\" (UniqueName: \"kubernetes.io/projected/1fe21b9b-fc35-41dd-aa42-deb7bef61c21-kube-api-access-59j5q\") pod \"frr-k8s-webhook-server-7784b6fcf-g8s2w\" (UID: \"1fe21b9b-fc35-41dd-aa42-deb7bef61c21\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.346081 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.389248 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edc90365-7a93-4042-ba66-e7ee4e6ba188-cert\") pod \"controller-5bddd4b946-6k4kx\" (UID: \"edc90365-7a93-4042-ba66-e7ee4e6ba188\") " pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.389339 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq56q\" (UniqueName: \"kubernetes.io/projected/edc90365-7a93-4042-ba66-e7ee4e6ba188-kube-api-access-nq56q\") pod \"controller-5bddd4b946-6k4kx\" (UID: \"edc90365-7a93-4042-ba66-e7ee4e6ba188\") " pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.389358 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edc90365-7a93-4042-ba66-e7ee4e6ba188-metrics-certs\") pod \"controller-5bddd4b946-6k4kx\" (UID: \"edc90365-7a93-4042-ba66-e7ee4e6ba188\") " pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.392786 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edc90365-7a93-4042-ba66-e7ee4e6ba188-metrics-certs\") pod \"controller-5bddd4b946-6k4kx\" (UID: \"edc90365-7a93-4042-ba66-e7ee4e6ba188\") " pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.393201 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edc90365-7a93-4042-ba66-e7ee4e6ba188-cert\") pod \"controller-5bddd4b946-6k4kx\" (UID: \"edc90365-7a93-4042-ba66-e7ee4e6ba188\") " pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:28 
crc kubenswrapper[4867]: I0101 08:41:28.404279 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq56q\" (UniqueName: \"kubernetes.io/projected/edc90365-7a93-4042-ba66-e7ee4e6ba188-kube-api-access-nq56q\") pod \"controller-5bddd4b946-6k4kx\" (UID: \"edc90365-7a93-4042-ba66-e7ee4e6ba188\") " pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.413226 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.432424 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.525519 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.586512 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzkdp"] Jan 01 08:41:28 crc kubenswrapper[4867]: W0101 08:41:28.631612 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4ffc669_dae0_4e4a_bc04_ce3957305b71.slice/crio-8a6e8dd49cf29f8a2c8e2a49449d126710187a6c7ab2288ca5c2f1588a2d1a47 WatchSource:0}: Error finding container 8a6e8dd49cf29f8a2c8e2a49449d126710187a6c7ab2288ca5c2f1588a2d1a47: Status 404 returned error can't find the container with id 8a6e8dd49cf29f8a2c8e2a49449d126710187a6c7ab2288ca5c2f1588a2d1a47 Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.797825 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-metrics-certs\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " 
pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.797871 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-memberlist\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: E0101 08:41:28.798045 4867 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 01 08:41:28 crc kubenswrapper[4867]: E0101 08:41:28.798086 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-memberlist podName:2996cfdf-82a5-4df9-b1e8-5553e35489b4 nodeName:}" failed. No retries permitted until 2026-01-01 08:41:29.798073493 +0000 UTC m=+898.933342262 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-memberlist") pod "speaker-cc2m9" (UID: "2996cfdf-82a5-4df9-b1e8-5553e35489b4") : secret "metallb-memberlist" not found Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.805014 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-metrics-certs\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.811900 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-6k4kx"] Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.843226 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-blnbq" 
event={"ID":"46986363-a0a5-4868-83b8-b1536fb75705","Type":"ContainerStarted","Data":"4c57148dd49d3925cf621556f2c765f1b682001e0810388f4fe78b54654beed5"} Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.844838 4867 generic.go:334] "Generic (PLEG): container finished" podID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" containerID="de6d08543ee2e863daa0acc9d3a88b80817a0c4b6280c116634ae3540ce37feb" exitCode=0 Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.844864 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkdp" event={"ID":"d4ffc669-dae0-4e4a-bc04-ce3957305b71","Type":"ContainerDied","Data":"de6d08543ee2e863daa0acc9d3a88b80817a0c4b6280c116634ae3540ce37feb"} Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.844878 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkdp" event={"ID":"d4ffc669-dae0-4e4a-bc04-ce3957305b71","Type":"ContainerStarted","Data":"8a6e8dd49cf29f8a2c8e2a49449d126710187a6c7ab2288ca5c2f1588a2d1a47"} Jan 01 08:41:28 crc kubenswrapper[4867]: I0101 08:41:28.889840 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w"] Jan 01 08:41:29 crc kubenswrapper[4867]: I0101 08:41:29.810872 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-memberlist\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:29 crc kubenswrapper[4867]: I0101 08:41:29.815858 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2996cfdf-82a5-4df9-b1e8-5553e35489b4-memberlist\") pod \"speaker-cc2m9\" (UID: \"2996cfdf-82a5-4df9-b1e8-5553e35489b4\") " pod="metallb-system/speaker-cc2m9" Jan 01 08:41:29 crc kubenswrapper[4867]: I0101 08:41:29.851766 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" event={"ID":"1fe21b9b-fc35-41dd-aa42-deb7bef61c21","Type":"ContainerStarted","Data":"92b1aee8a879b0b266c06ac0243fad65c6fd7d2f2ccc9fdd70f479883151220e"} Jan 01 08:41:29 crc kubenswrapper[4867]: I0101 08:41:29.853694 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-6k4kx" event={"ID":"edc90365-7a93-4042-ba66-e7ee4e6ba188","Type":"ContainerStarted","Data":"1bfc77a2bd260083c95269c7251c25a7d415e4cc00a639df03934018c16fc676"} Jan 01 08:41:29 crc kubenswrapper[4867]: I0101 08:41:29.853738 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-6k4kx" event={"ID":"edc90365-7a93-4042-ba66-e7ee4e6ba188","Type":"ContainerStarted","Data":"d64397469e25e0d324b67077e066e8d90392511259dfa8bf4cf2fe35e1a485d9"} Jan 01 08:41:29 crc kubenswrapper[4867]: I0101 08:41:29.853748 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-6k4kx" event={"ID":"edc90365-7a93-4042-ba66-e7ee4e6ba188","Type":"ContainerStarted","Data":"0240b508a00680a52b7708b542a9d19a4fa6d6a81fa0b7ca4013272d1de9fa8d"} Jan 01 08:41:29 crc kubenswrapper[4867]: I0101 08:41:29.853895 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:29 crc kubenswrapper[4867]: I0101 08:41:29.876877 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-6k4kx" podStartSLOduration=1.8768556969999999 podStartE2EDuration="1.876855697s" podCreationTimestamp="2026-01-01 08:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:41:29.871095637 +0000 UTC m=+899.006364406" watchObservedRunningTime="2026-01-01 08:41:29.876855697 +0000 UTC m=+899.012124466" Jan 01 08:41:29 crc 
kubenswrapper[4867]: I0101 08:41:29.992737 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cc2m9" Jan 01 08:41:30 crc kubenswrapper[4867]: I0101 08:41:30.863662 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cc2m9" event={"ID":"2996cfdf-82a5-4df9-b1e8-5553e35489b4","Type":"ContainerStarted","Data":"f9c81fad4bb2a91664c5dab457c4168cc3e73aa04c8362f044fb57f4db558cde"} Jan 01 08:41:30 crc kubenswrapper[4867]: I0101 08:41:30.863942 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cc2m9" event={"ID":"2996cfdf-82a5-4df9-b1e8-5553e35489b4","Type":"ContainerStarted","Data":"a372a2bf8e121b209ae7e110daf02ae86fcd320bbe08519a21d89edf786bafe5"} Jan 01 08:41:30 crc kubenswrapper[4867]: I0101 08:41:30.863953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cc2m9" event={"ID":"2996cfdf-82a5-4df9-b1e8-5553e35489b4","Type":"ContainerStarted","Data":"d57ab08ed60dc6c5fea694fc7489253e5438c8b854964d527af45f63c53ab5ce"} Jan 01 08:41:30 crc kubenswrapper[4867]: I0101 08:41:30.864167 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cc2m9" Jan 01 08:41:30 crc kubenswrapper[4867]: I0101 08:41:30.867022 4867 generic.go:334] "Generic (PLEG): container finished" podID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" containerID="3822140d97f75a38c476744361f5bbcbd085f00f7cd261019bdb50709db6bed0" exitCode=0 Jan 01 08:41:30 crc kubenswrapper[4867]: I0101 08:41:30.867160 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkdp" event={"ID":"d4ffc669-dae0-4e4a-bc04-ce3957305b71","Type":"ContainerDied","Data":"3822140d97f75a38c476744361f5bbcbd085f00f7cd261019bdb50709db6bed0"} Jan 01 08:41:30 crc kubenswrapper[4867]: I0101 08:41:30.893847 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cc2m9" 
podStartSLOduration=2.893828884 podStartE2EDuration="2.893828884s" podCreationTimestamp="2026-01-01 08:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:41:30.886676826 +0000 UTC m=+900.021945595" watchObservedRunningTime="2026-01-01 08:41:30.893828884 +0000 UTC m=+900.029097663" Jan 01 08:41:31 crc kubenswrapper[4867]: I0101 08:41:31.882423 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkdp" event={"ID":"d4ffc669-dae0-4e4a-bc04-ce3957305b71","Type":"ContainerStarted","Data":"5eca29e2615c1efbd67e6afaba1845b29224c55d0ec5bd722d383cad07894ded"} Jan 01 08:41:31 crc kubenswrapper[4867]: I0101 08:41:31.905114 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hzkdp" podStartSLOduration=1.407714417 podStartE2EDuration="3.905096674s" podCreationTimestamp="2026-01-01 08:41:28 +0000 UTC" firstStartedPulling="2026-01-01 08:41:28.846660793 +0000 UTC m=+897.981929552" lastFinishedPulling="2026-01-01 08:41:31.34404304 +0000 UTC m=+900.479311809" observedRunningTime="2026-01-01 08:41:31.900343382 +0000 UTC m=+901.035612181" watchObservedRunningTime="2026-01-01 08:41:31.905096674 +0000 UTC m=+901.040365443" Jan 01 08:41:36 crc kubenswrapper[4867]: I0101 08:41:36.910603 4867 generic.go:334] "Generic (PLEG): container finished" podID="46986363-a0a5-4868-83b8-b1536fb75705" containerID="14ac7c02020a3813b34931aeab9d9e2e9201b4f4170619ea1ae9d3db3549d1dc" exitCode=0 Jan 01 08:41:36 crc kubenswrapper[4867]: I0101 08:41:36.910661 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-blnbq" event={"ID":"46986363-a0a5-4868-83b8-b1536fb75705","Type":"ContainerDied","Data":"14ac7c02020a3813b34931aeab9d9e2e9201b4f4170619ea1ae9d3db3549d1dc"} Jan 01 08:41:36 crc kubenswrapper[4867]: I0101 08:41:36.913678 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" event={"ID":"1fe21b9b-fc35-41dd-aa42-deb7bef61c21","Type":"ContainerStarted","Data":"f30141983144e97914f6afa9902afbe064b21cd0f4d45ae8622fbcc031bccda9"} Jan 01 08:41:36 crc kubenswrapper[4867]: I0101 08:41:36.913842 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" Jan 01 08:41:36 crc kubenswrapper[4867]: I0101 08:41:36.949588 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" podStartSLOduration=1.7302772549999998 podStartE2EDuration="8.949566058s" podCreationTimestamp="2026-01-01 08:41:28 +0000 UTC" firstStartedPulling="2026-01-01 08:41:28.90813992 +0000 UTC m=+898.043408689" lastFinishedPulling="2026-01-01 08:41:36.127428713 +0000 UTC m=+905.262697492" observedRunningTime="2026-01-01 08:41:36.946960385 +0000 UTC m=+906.082229154" watchObservedRunningTime="2026-01-01 08:41:36.949566058 +0000 UTC m=+906.084834827" Jan 01 08:41:37 crc kubenswrapper[4867]: I0101 08:41:37.923348 4867 generic.go:334] "Generic (PLEG): container finished" podID="46986363-a0a5-4868-83b8-b1536fb75705" containerID="576ef27a65233749589d9ee5a03185dbe1b2e7f8f4c470d75f6c4f40c2a2f55b" exitCode=0 Jan 01 08:41:37 crc kubenswrapper[4867]: I0101 08:41:37.923417 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-blnbq" event={"ID":"46986363-a0a5-4868-83b8-b1536fb75705","Type":"ContainerDied","Data":"576ef27a65233749589d9ee5a03185dbe1b2e7f8f4c470d75f6c4f40c2a2f55b"} Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.222519 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fsr4l"] Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.224099 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.237842 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fsr4l"] Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.256411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-utilities\") pod \"community-operators-fsr4l\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.256463 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-catalog-content\") pod \"community-operators-fsr4l\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.256507 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd9z6\" (UniqueName: \"kubernetes.io/projected/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-kube-api-access-rd9z6\") pod \"community-operators-fsr4l\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.346804 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.346862 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.357982 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-utilities\") pod \"community-operators-fsr4l\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.358023 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-catalog-content\") pod \"community-operators-fsr4l\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.358055 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd9z6\" (UniqueName: \"kubernetes.io/projected/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-kube-api-access-rd9z6\") pod \"community-operators-fsr4l\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.358909 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-utilities\") pod \"community-operators-fsr4l\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.359003 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-catalog-content\") pod \"community-operators-fsr4l\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.393862 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rd9z6\" (UniqueName: \"kubernetes.io/projected/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-kube-api-access-rd9z6\") pod \"community-operators-fsr4l\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.418357 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.530028 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-6k4kx" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.545633 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.797099 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fsr4l"] Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.930270 4867 generic.go:334] "Generic (PLEG): container finished" podID="46986363-a0a5-4868-83b8-b1536fb75705" containerID="accf5f8f96eb1bea06d03bac96982e0102899123735df6e2c457c9cbfc3e3251" exitCode=0 Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.930477 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-blnbq" event={"ID":"46986363-a0a5-4868-83b8-b1536fb75705","Type":"ContainerDied","Data":"accf5f8f96eb1bea06d03bac96982e0102899123735df6e2c457c9cbfc3e3251"} Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.931960 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsr4l" event={"ID":"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683","Type":"ContainerStarted","Data":"ca01fb51fce8628534d746145f08a14173490b09b50e22323f1593a0d12f9a81"} Jan 01 08:41:38 crc kubenswrapper[4867]: I0101 08:41:38.979150 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:39 crc kubenswrapper[4867]: I0101 08:41:39.943645 4867 generic.go:334] "Generic (PLEG): container finished" podID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" containerID="38f40fedf4c47e39c7e4206493af09aacb55b15ebdc9284fdebbc74821801866" exitCode=0 Jan 01 08:41:39 crc kubenswrapper[4867]: I0101 08:41:39.943974 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsr4l" event={"ID":"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683","Type":"ContainerDied","Data":"38f40fedf4c47e39c7e4206493af09aacb55b15ebdc9284fdebbc74821801866"} Jan 01 08:41:39 crc kubenswrapper[4867]: I0101 08:41:39.951747 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-blnbq" event={"ID":"46986363-a0a5-4868-83b8-b1536fb75705","Type":"ContainerStarted","Data":"160b27a897a088f65b3d9021526a96fb854459bcfede6fc20ecd845c46d4a61f"} Jan 01 08:41:39 crc kubenswrapper[4867]: I0101 08:41:39.951810 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-blnbq" event={"ID":"46986363-a0a5-4868-83b8-b1536fb75705","Type":"ContainerStarted","Data":"f407f3d911d03b45b5756628c9447f7c05d90f0ec817631e5bb9681325d54b36"} Jan 01 08:41:39 crc kubenswrapper[4867]: I0101 08:41:39.951833 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-blnbq" event={"ID":"46986363-a0a5-4868-83b8-b1536fb75705","Type":"ContainerStarted","Data":"bd4b22538990dfe17f9d2b0b5888b7bbb73b6fb35c2397f45c7f7d867fa8f01e"} Jan 01 08:41:40 crc kubenswrapper[4867]: I0101 08:41:40.965427 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-blnbq" event={"ID":"46986363-a0a5-4868-83b8-b1536fb75705","Type":"ContainerStarted","Data":"0cf7125d404a003bea28f5ddfafcf90ad3f9961a7134c4b1fa5a75411b0463ef"} Jan 01 08:41:40 crc kubenswrapper[4867]: I0101 08:41:40.965750 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:40 crc kubenswrapper[4867]: I0101 08:41:40.965764 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-blnbq" event={"ID":"46986363-a0a5-4868-83b8-b1536fb75705","Type":"ContainerStarted","Data":"f23a788d5daaf01ceb4ca772712c0bcc8fb91c51bb41b260719d7e5ac11e54c4"} Jan 01 08:41:40 crc kubenswrapper[4867]: I0101 08:41:40.965777 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-blnbq" event={"ID":"46986363-a0a5-4868-83b8-b1536fb75705","Type":"ContainerStarted","Data":"23a211ad2e0cd27878d1f9615d4b81e738a930c4b3468d2679bc2610fb47377b"} Jan 01 08:41:40 crc kubenswrapper[4867]: I0101 08:41:40.968698 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsr4l" event={"ID":"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683","Type":"ContainerStarted","Data":"2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591"} Jan 01 08:41:40 crc kubenswrapper[4867]: I0101 08:41:40.988444 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-blnbq" podStartSLOduration=5.530268814 podStartE2EDuration="12.98838497s" podCreationTimestamp="2026-01-01 08:41:28 +0000 UTC" firstStartedPulling="2026-01-01 08:41:28.645344011 +0000 UTC m=+897.780612780" lastFinishedPulling="2026-01-01 08:41:36.103460157 +0000 UTC m=+905.238728936" observedRunningTime="2026-01-01 08:41:40.987169836 +0000 UTC m=+910.122438645" watchObservedRunningTime="2026-01-01 08:41:40.98838497 +0000 UTC m=+910.123653739" Jan 01 08:41:41 crc kubenswrapper[4867]: I0101 08:41:41.977031 4867 generic.go:334] "Generic (PLEG): container finished" podID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" containerID="2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591" exitCode=0 Jan 01 08:41:41 crc kubenswrapper[4867]: I0101 08:41:41.977130 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-fsr4l" event={"ID":"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683","Type":"ContainerDied","Data":"2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591"} Jan 01 08:41:42 crc kubenswrapper[4867]: I0101 08:41:42.984833 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsr4l" event={"ID":"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683","Type":"ContainerStarted","Data":"45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b"} Jan 01 08:41:43 crc kubenswrapper[4867]: I0101 08:41:43.007146 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fsr4l" podStartSLOduration=2.609917698 podStartE2EDuration="5.007124523s" podCreationTimestamp="2026-01-01 08:41:38 +0000 UTC" firstStartedPulling="2026-01-01 08:41:39.946248194 +0000 UTC m=+909.081517003" lastFinishedPulling="2026-01-01 08:41:42.343455049 +0000 UTC m=+911.478723828" observedRunningTime="2026-01-01 08:41:43.003582784 +0000 UTC m=+912.138851583" watchObservedRunningTime="2026-01-01 08:41:43.007124523 +0000 UTC m=+912.142393302" Jan 01 08:41:43 crc kubenswrapper[4867]: I0101 08:41:43.433018 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:43 crc kubenswrapper[4867]: I0101 08:41:43.481556 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:44 crc kubenswrapper[4867]: I0101 08:41:44.619119 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzkdp"] Jan 01 08:41:44 crc kubenswrapper[4867]: I0101 08:41:44.619983 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hzkdp" podUID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" containerName="registry-server" 
containerID="cri-o://5eca29e2615c1efbd67e6afaba1845b29224c55d0ec5bd722d383cad07894ded" gracePeriod=2 Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 08:41:45.000039 4867 generic.go:334] "Generic (PLEG): container finished" podID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" containerID="5eca29e2615c1efbd67e6afaba1845b29224c55d0ec5bd722d383cad07894ded" exitCode=0 Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 08:41:45.000085 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkdp" event={"ID":"d4ffc669-dae0-4e4a-bc04-ce3957305b71","Type":"ContainerDied","Data":"5eca29e2615c1efbd67e6afaba1845b29224c55d0ec5bd722d383cad07894ded"} Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 08:41:45.624710 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 08:41:45.775466 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkrt9\" (UniqueName: \"kubernetes.io/projected/d4ffc669-dae0-4e4a-bc04-ce3957305b71-kube-api-access-qkrt9\") pod \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 08:41:45.775542 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-catalog-content\") pod \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 08:41:45.775567 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-utilities\") pod \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\" (UID: \"d4ffc669-dae0-4e4a-bc04-ce3957305b71\") " Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 
08:41:45.776464 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-utilities" (OuterVolumeSpecName: "utilities") pod "d4ffc669-dae0-4e4a-bc04-ce3957305b71" (UID: "d4ffc669-dae0-4e4a-bc04-ce3957305b71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 08:41:45.782034 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ffc669-dae0-4e4a-bc04-ce3957305b71-kube-api-access-qkrt9" (OuterVolumeSpecName: "kube-api-access-qkrt9") pod "d4ffc669-dae0-4e4a-bc04-ce3957305b71" (UID: "d4ffc669-dae0-4e4a-bc04-ce3957305b71"). InnerVolumeSpecName "kube-api-access-qkrt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 08:41:45.825817 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4ffc669-dae0-4e4a-bc04-ce3957305b71" (UID: "d4ffc669-dae0-4e4a-bc04-ce3957305b71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 08:41:45.877190 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkrt9\" (UniqueName: \"kubernetes.io/projected/d4ffc669-dae0-4e4a-bc04-ce3957305b71-kube-api-access-qkrt9\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 08:41:45.877244 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:45 crc kubenswrapper[4867]: I0101 08:41:45.877262 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ffc669-dae0-4e4a-bc04-ce3957305b71-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:46 crc kubenswrapper[4867]: I0101 08:41:46.011204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkdp" event={"ID":"d4ffc669-dae0-4e4a-bc04-ce3957305b71","Type":"ContainerDied","Data":"8a6e8dd49cf29f8a2c8e2a49449d126710187a6c7ab2288ca5c2f1588a2d1a47"} Jan 01 08:41:46 crc kubenswrapper[4867]: I0101 08:41:46.011257 4867 scope.go:117] "RemoveContainer" containerID="5eca29e2615c1efbd67e6afaba1845b29224c55d0ec5bd722d383cad07894ded" Jan 01 08:41:46 crc kubenswrapper[4867]: I0101 08:41:46.011267 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzkdp" Jan 01 08:41:46 crc kubenswrapper[4867]: I0101 08:41:46.033832 4867 scope.go:117] "RemoveContainer" containerID="3822140d97f75a38c476744361f5bbcbd085f00f7cd261019bdb50709db6bed0" Jan 01 08:41:46 crc kubenswrapper[4867]: I0101 08:41:46.047986 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzkdp"] Jan 01 08:41:46 crc kubenswrapper[4867]: I0101 08:41:46.053432 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hzkdp"] Jan 01 08:41:46 crc kubenswrapper[4867]: I0101 08:41:46.072201 4867 scope.go:117] "RemoveContainer" containerID="de6d08543ee2e863daa0acc9d3a88b80817a0c4b6280c116634ae3540ce37feb" Jan 01 08:41:47 crc kubenswrapper[4867]: I0101 08:41:47.143365 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" path="/var/lib/kubelet/pods/d4ffc669-dae0-4e4a-bc04-ce3957305b71/volumes" Jan 01 08:41:48 crc kubenswrapper[4867]: I0101 08:41:48.417703 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-g8s2w" Jan 01 08:41:48 crc kubenswrapper[4867]: I0101 08:41:48.545926 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:48 crc kubenswrapper[4867]: I0101 08:41:48.546324 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:48 crc kubenswrapper[4867]: I0101 08:41:48.612993 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:49 crc kubenswrapper[4867]: I0101 08:41:49.079854 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fsr4l" Jan 01 
08:41:49 crc kubenswrapper[4867]: I0101 08:41:49.996280 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cc2m9" Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.331482 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.331573 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.876652 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl"] Jan 01 08:41:51 crc kubenswrapper[4867]: E0101 08:41:51.876979 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" containerName="extract-utilities" Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.877000 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" containerName="extract-utilities" Jan 01 08:41:51 crc kubenswrapper[4867]: E0101 08:41:51.877032 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" containerName="extract-content" Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.877043 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" containerName="extract-content" Jan 01 08:41:51 crc kubenswrapper[4867]: E0101 08:41:51.877061 4867 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" containerName="registry-server" Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.877074 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" containerName="registry-server" Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.877271 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4ffc669-dae0-4e4a-bc04-ce3957305b71" containerName="registry-server" Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.878388 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.880360 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.884468 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl"] Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.961008 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl\" (UID: \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.961054 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl\" (UID: 
\"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:51 crc kubenswrapper[4867]: I0101 08:41:51.961101 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wftnv\" (UniqueName: \"kubernetes.io/projected/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-kube-api-access-wftnv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl\" (UID: \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.014315 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fsr4l"] Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.014653 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fsr4l" podUID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" containerName="registry-server" containerID="cri-o://45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b" gracePeriod=2 Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.063260 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl\" (UID: \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.063697 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl\" (UID: \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.063743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl\" (UID: \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.063820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wftnv\" (UniqueName: \"kubernetes.io/projected/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-kube-api-access-wftnv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl\" (UID: \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.064306 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl\" (UID: \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.091341 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wftnv\" (UniqueName: \"kubernetes.io/projected/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-kube-api-access-wftnv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl\" (UID: \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 
08:41:52.203087 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.412372 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.468857 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-catalog-content\") pod \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.468954 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-utilities\") pod \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.468991 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd9z6\" (UniqueName: \"kubernetes.io/projected/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-kube-api-access-rd9z6\") pod \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\" (UID: \"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683\") " Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.469912 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-utilities" (OuterVolumeSpecName: "utilities") pod "74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" (UID: "74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.476194 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-kube-api-access-rd9z6" (OuterVolumeSpecName: "kube-api-access-rd9z6") pod "74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" (UID: "74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683"). InnerVolumeSpecName "kube-api-access-rd9z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.524143 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" (UID: "74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.570637 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.570675 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.570695 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd9z6\" (UniqueName: \"kubernetes.io/projected/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683-kube-api-access-rd9z6\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:52 crc kubenswrapper[4867]: I0101 08:41:52.643791 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl"] Jan 01 
08:41:52 crc kubenswrapper[4867]: W0101 08:41:52.658252 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5482c47_b1ad_4526_b3f3_b0388ae47cc9.slice/crio-8eebbeda2c3f9154c0cbdd41be511f729cf3799fc0c2620c1cdd647e165fb360 WatchSource:0}: Error finding container 8eebbeda2c3f9154c0cbdd41be511f729cf3799fc0c2620c1cdd647e165fb360: Status 404 returned error can't find the container with id 8eebbeda2c3f9154c0cbdd41be511f729cf3799fc0c2620c1cdd647e165fb360 Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.070910 4867 generic.go:334] "Generic (PLEG): container finished" podID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" containerID="45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b" exitCode=0 Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.071016 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsr4l" event={"ID":"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683","Type":"ContainerDied","Data":"45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b"} Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.071097 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsr4l" event={"ID":"74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683","Type":"ContainerDied","Data":"ca01fb51fce8628534d746145f08a14173490b09b50e22323f1593a0d12f9a81"} Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.071038 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fsr4l" Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.071118 4867 scope.go:117] "RemoveContainer" containerID="45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b" Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.073673 4867 generic.go:334] "Generic (PLEG): container finished" podID="c5482c47-b1ad-4526-b3f3-b0388ae47cc9" containerID="8899a39c10dfbe0c192f4e2b21608dbacda0d3a419374fe136eafaa3e00dcc7e" exitCode=0 Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.073707 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" event={"ID":"c5482c47-b1ad-4526-b3f3-b0388ae47cc9","Type":"ContainerDied","Data":"8899a39c10dfbe0c192f4e2b21608dbacda0d3a419374fe136eafaa3e00dcc7e"} Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.073736 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" event={"ID":"c5482c47-b1ad-4526-b3f3-b0388ae47cc9","Type":"ContainerStarted","Data":"8eebbeda2c3f9154c0cbdd41be511f729cf3799fc0c2620c1cdd647e165fb360"} Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.097010 4867 scope.go:117] "RemoveContainer" containerID="2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591" Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.151599 4867 scope.go:117] "RemoveContainer" containerID="38f40fedf4c47e39c7e4206493af09aacb55b15ebdc9284fdebbc74821801866" Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.197951 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fsr4l"] Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.207113 4867 scope.go:117] "RemoveContainer" containerID="45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b" Jan 01 08:41:53 crc kubenswrapper[4867]: E0101 
08:41:53.207478 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b\": container with ID starting with 45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b not found: ID does not exist" containerID="45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b" Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.207507 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b"} err="failed to get container status \"45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b\": rpc error: code = NotFound desc = could not find container \"45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b\": container with ID starting with 45268a3439b059d837cfd12f2c18478e939f45aeb3287cc88d5d661cb77c502b not found: ID does not exist" Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.207529 4867 scope.go:117] "RemoveContainer" containerID="2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591" Jan 01 08:41:53 crc kubenswrapper[4867]: E0101 08:41:53.207689 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591\": container with ID starting with 2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591 not found: ID does not exist" containerID="2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591" Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.207710 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591"} err="failed to get container status \"2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591\": rpc 
error: code = NotFound desc = could not find container \"2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591\": container with ID starting with 2d28d1925facab19b7b5ec6b7384cfc5fa3837d7aaf3b1a5813e5a3e4b740591 not found: ID does not exist" Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.207723 4867 scope.go:117] "RemoveContainer" containerID="38f40fedf4c47e39c7e4206493af09aacb55b15ebdc9284fdebbc74821801866" Jan 01 08:41:53 crc kubenswrapper[4867]: E0101 08:41:53.207909 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f40fedf4c47e39c7e4206493af09aacb55b15ebdc9284fdebbc74821801866\": container with ID starting with 38f40fedf4c47e39c7e4206493af09aacb55b15ebdc9284fdebbc74821801866 not found: ID does not exist" containerID="38f40fedf4c47e39c7e4206493af09aacb55b15ebdc9284fdebbc74821801866" Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.207931 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f40fedf4c47e39c7e4206493af09aacb55b15ebdc9284fdebbc74821801866"} err="failed to get container status \"38f40fedf4c47e39c7e4206493af09aacb55b15ebdc9284fdebbc74821801866\": rpc error: code = NotFound desc = could not find container \"38f40fedf4c47e39c7e4206493af09aacb55b15ebdc9284fdebbc74821801866\": container with ID starting with 38f40fedf4c47e39c7e4206493af09aacb55b15ebdc9284fdebbc74821801866 not found: ID does not exist" Jan 01 08:41:53 crc kubenswrapper[4867]: I0101 08:41:53.211957 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fsr4l"] Jan 01 08:41:55 crc kubenswrapper[4867]: I0101 08:41:55.135794 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" path="/var/lib/kubelet/pods/74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683/volumes" Jan 01 08:41:56 crc kubenswrapper[4867]: I0101 08:41:56.104398 4867 generic.go:334] 
"Generic (PLEG): container finished" podID="c5482c47-b1ad-4526-b3f3-b0388ae47cc9" containerID="d8942a49fc6c4fcf6fe57a17e13354a4118879ee9029aa41c48709faa1ab9187" exitCode=0 Jan 01 08:41:56 crc kubenswrapper[4867]: I0101 08:41:56.104501 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" event={"ID":"c5482c47-b1ad-4526-b3f3-b0388ae47cc9","Type":"ContainerDied","Data":"d8942a49fc6c4fcf6fe57a17e13354a4118879ee9029aa41c48709faa1ab9187"} Jan 01 08:41:57 crc kubenswrapper[4867]: I0101 08:41:57.114812 4867 generic.go:334] "Generic (PLEG): container finished" podID="c5482c47-b1ad-4526-b3f3-b0388ae47cc9" containerID="07ebf50f61165824ad01249de6710a9e606a5a50f06f7ba7678ed22506db6c12" exitCode=0 Jan 01 08:41:57 crc kubenswrapper[4867]: I0101 08:41:57.114858 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" event={"ID":"c5482c47-b1ad-4526-b3f3-b0388ae47cc9","Type":"ContainerDied","Data":"07ebf50f61165824ad01249de6710a9e606a5a50f06f7ba7678ed22506db6c12"} Jan 01 08:41:58 crc kubenswrapper[4867]: I0101 08:41:58.451137 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-blnbq" Jan 01 08:41:58 crc kubenswrapper[4867]: I0101 08:41:58.563423 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:58 crc kubenswrapper[4867]: I0101 08:41:58.662480 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-util\") pod \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\" (UID: \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " Jan 01 08:41:58 crc kubenswrapper[4867]: I0101 08:41:58.662536 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-bundle\") pod \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\" (UID: \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " Jan 01 08:41:58 crc kubenswrapper[4867]: I0101 08:41:58.662633 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wftnv\" (UniqueName: \"kubernetes.io/projected/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-kube-api-access-wftnv\") pod \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\" (UID: \"c5482c47-b1ad-4526-b3f3-b0388ae47cc9\") " Jan 01 08:41:58 crc kubenswrapper[4867]: I0101 08:41:58.663492 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-bundle" (OuterVolumeSpecName: "bundle") pod "c5482c47-b1ad-4526-b3f3-b0388ae47cc9" (UID: "c5482c47-b1ad-4526-b3f3-b0388ae47cc9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:41:58 crc kubenswrapper[4867]: I0101 08:41:58.670907 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-kube-api-access-wftnv" (OuterVolumeSpecName: "kube-api-access-wftnv") pod "c5482c47-b1ad-4526-b3f3-b0388ae47cc9" (UID: "c5482c47-b1ad-4526-b3f3-b0388ae47cc9"). InnerVolumeSpecName "kube-api-access-wftnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:41:58 crc kubenswrapper[4867]: I0101 08:41:58.673168 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-util" (OuterVolumeSpecName: "util") pod "c5482c47-b1ad-4526-b3f3-b0388ae47cc9" (UID: "c5482c47-b1ad-4526-b3f3-b0388ae47cc9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:41:58 crc kubenswrapper[4867]: I0101 08:41:58.764570 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wftnv\" (UniqueName: \"kubernetes.io/projected/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-kube-api-access-wftnv\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:58 crc kubenswrapper[4867]: I0101 08:41:58.764637 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-util\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:58 crc kubenswrapper[4867]: I0101 08:41:58.764658 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5482c47-b1ad-4526-b3f3-b0388ae47cc9-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:41:59 crc kubenswrapper[4867]: I0101 08:41:59.132448 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" Jan 01 08:41:59 crc kubenswrapper[4867]: I0101 08:41:59.140111 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl" event={"ID":"c5482c47-b1ad-4526-b3f3-b0388ae47cc9","Type":"ContainerDied","Data":"8eebbeda2c3f9154c0cbdd41be511f729cf3799fc0c2620c1cdd647e165fb360"} Jan 01 08:41:59 crc kubenswrapper[4867]: I0101 08:41:59.140158 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eebbeda2c3f9154c0cbdd41be511f729cf3799fc0c2620c1cdd647e165fb360" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.358754 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p"] Jan 01 08:42:05 crc kubenswrapper[4867]: E0101 08:42:05.359562 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5482c47-b1ad-4526-b3f3-b0388ae47cc9" containerName="extract" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.359576 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5482c47-b1ad-4526-b3f3-b0388ae47cc9" containerName="extract" Jan 01 08:42:05 crc kubenswrapper[4867]: E0101 08:42:05.359597 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" containerName="registry-server" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.359604 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" containerName="registry-server" Jan 01 08:42:05 crc kubenswrapper[4867]: E0101 08:42:05.359613 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" containerName="extract-content" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.359621 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" containerName="extract-content" Jan 01 08:42:05 crc kubenswrapper[4867]: E0101 08:42:05.359631 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5482c47-b1ad-4526-b3f3-b0388ae47cc9" containerName="pull" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.359638 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5482c47-b1ad-4526-b3f3-b0388ae47cc9" containerName="pull" Jan 01 08:42:05 crc kubenswrapper[4867]: E0101 08:42:05.359648 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5482c47-b1ad-4526-b3f3-b0388ae47cc9" containerName="util" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.359655 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5482c47-b1ad-4526-b3f3-b0388ae47cc9" containerName="util" Jan 01 08:42:05 crc kubenswrapper[4867]: E0101 08:42:05.359664 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" containerName="extract-utilities" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.359671 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" containerName="extract-utilities" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.360035 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="74bc2a1b-1f1d-4ed9-bdc3-9a313f0b1683" containerName="registry-server" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.360048 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5482c47-b1ad-4526-b3f3-b0388ae47cc9" containerName="extract" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.360510 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.363743 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.363749 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.363847 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-jxgzk" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.378789 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p"] Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.459154 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn2rl\" (UniqueName: \"kubernetes.io/projected/76941014-214f-47c1-85ab-60238cf5b2b6-kube-api-access-pn2rl\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vpb4p\" (UID: \"76941014-214f-47c1-85ab-60238cf5b2b6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.459381 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76941014-214f-47c1-85ab-60238cf5b2b6-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vpb4p\" (UID: \"76941014-214f-47c1-85ab-60238cf5b2b6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.560744 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pn2rl\" (UniqueName: \"kubernetes.io/projected/76941014-214f-47c1-85ab-60238cf5b2b6-kube-api-access-pn2rl\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vpb4p\" (UID: \"76941014-214f-47c1-85ab-60238cf5b2b6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.561123 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76941014-214f-47c1-85ab-60238cf5b2b6-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vpb4p\" (UID: \"76941014-214f-47c1-85ab-60238cf5b2b6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.561612 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76941014-214f-47c1-85ab-60238cf5b2b6-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vpb4p\" (UID: \"76941014-214f-47c1-85ab-60238cf5b2b6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.578841 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn2rl\" (UniqueName: \"kubernetes.io/projected/76941014-214f-47c1-85ab-60238cf5b2b6-kube-api-access-pn2rl\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vpb4p\" (UID: \"76941014-214f-47c1-85ab-60238cf5b2b6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p" Jan 01 08:42:05 crc kubenswrapper[4867]: I0101 08:42:05.676148 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p" Jan 01 08:42:06 crc kubenswrapper[4867]: I0101 08:42:06.084297 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p"] Jan 01 08:42:06 crc kubenswrapper[4867]: W0101 08:42:06.097451 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76941014_214f_47c1_85ab_60238cf5b2b6.slice/crio-cc6c6f85bc50f963552aa6632423171eba20c671375b131e2506d763e88ae0ef WatchSource:0}: Error finding container cc6c6f85bc50f963552aa6632423171eba20c671375b131e2506d763e88ae0ef: Status 404 returned error can't find the container with id cc6c6f85bc50f963552aa6632423171eba20c671375b131e2506d763e88ae0ef Jan 01 08:42:06 crc kubenswrapper[4867]: I0101 08:42:06.193394 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p" event={"ID":"76941014-214f-47c1-85ab-60238cf5b2b6","Type":"ContainerStarted","Data":"cc6c6f85bc50f963552aa6632423171eba20c671375b131e2506d763e88ae0ef"} Jan 01 08:42:14 crc kubenswrapper[4867]: I0101 08:42:14.270424 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p" event={"ID":"76941014-214f-47c1-85ab-60238cf5b2b6","Type":"ContainerStarted","Data":"be048b9edf3fde8f46684b2febd3523780172b5346c3d66b9cf9903caf95f47e"} Jan 01 08:42:14 crc kubenswrapper[4867]: I0101 08:42:14.294749 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vpb4p" podStartSLOduration=1.317388867 podStartE2EDuration="9.294733675s" podCreationTimestamp="2026-01-01 08:42:05 +0000 UTC" firstStartedPulling="2026-01-01 08:42:06.099511975 +0000 UTC m=+935.234780744" 
lastFinishedPulling="2026-01-01 08:42:14.076856783 +0000 UTC m=+943.212125552" observedRunningTime="2026-01-01 08:42:14.292689338 +0000 UTC m=+943.427958177" watchObservedRunningTime="2026-01-01 08:42:14.294733675 +0000 UTC m=+943.430002434" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.074074 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-s5bdk"] Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.075513 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.077796 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.078035 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.079539 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-zfstq" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.091907 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-s5bdk"] Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.177297 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v727\" (UniqueName: \"kubernetes.io/projected/53107d29-98dc-4814-9829-38a2b5243e0d-kube-api-access-5v727\") pod \"cert-manager-webhook-f4fb5df64-s5bdk\" (UID: \"53107d29-98dc-4814-9829-38a2b5243e0d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.177342 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/53107d29-98dc-4814-9829-38a2b5243e0d-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-s5bdk\" (UID: \"53107d29-98dc-4814-9829-38a2b5243e0d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.278535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v727\" (UniqueName: \"kubernetes.io/projected/53107d29-98dc-4814-9829-38a2b5243e0d-kube-api-access-5v727\") pod \"cert-manager-webhook-f4fb5df64-s5bdk\" (UID: \"53107d29-98dc-4814-9829-38a2b5243e0d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.278581 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53107d29-98dc-4814-9829-38a2b5243e0d-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-s5bdk\" (UID: \"53107d29-98dc-4814-9829-38a2b5243e0d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.308915 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53107d29-98dc-4814-9829-38a2b5243e0d-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-s5bdk\" (UID: \"53107d29-98dc-4814-9829-38a2b5243e0d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.317258 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v727\" (UniqueName: \"kubernetes.io/projected/53107d29-98dc-4814-9829-38a2b5243e0d-kube-api-access-5v727\") pod \"cert-manager-webhook-f4fb5df64-s5bdk\" (UID: \"53107d29-98dc-4814-9829-38a2b5243e0d\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.432438 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" Jan 01 08:42:18 crc kubenswrapper[4867]: I0101 08:42:18.870691 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-s5bdk"] Jan 01 08:42:19 crc kubenswrapper[4867]: I0101 08:42:19.297173 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" event={"ID":"53107d29-98dc-4814-9829-38a2b5243e0d","Type":"ContainerStarted","Data":"32f2e5a141b7d40881715ad5665b354c442e6377a191a5c7df87148cdfacecac"} Jan 01 08:42:20 crc kubenswrapper[4867]: I0101 08:42:20.548173 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f"] Jan 01 08:42:20 crc kubenswrapper[4867]: I0101 08:42:20.549399 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f" Jan 01 08:42:20 crc kubenswrapper[4867]: I0101 08:42:20.552071 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fhl99" Jan 01 08:42:20 crc kubenswrapper[4867]: I0101 08:42:20.555718 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f"] Jan 01 08:42:20 crc kubenswrapper[4867]: I0101 08:42:20.607451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wckzc\" (UniqueName: \"kubernetes.io/projected/b43c2576-b232-4853-a699-12c3c3af0886-kube-api-access-wckzc\") pod \"cert-manager-cainjector-855d9ccff4-lsg2f\" (UID: \"b43c2576-b232-4853-a699-12c3c3af0886\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f" Jan 01 08:42:20 crc kubenswrapper[4867]: I0101 08:42:20.607559 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b43c2576-b232-4853-a699-12c3c3af0886-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-lsg2f\" (UID: \"b43c2576-b232-4853-a699-12c3c3af0886\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f" Jan 01 08:42:20 crc kubenswrapper[4867]: I0101 08:42:20.709078 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wckzc\" (UniqueName: \"kubernetes.io/projected/b43c2576-b232-4853-a699-12c3c3af0886-kube-api-access-wckzc\") pod \"cert-manager-cainjector-855d9ccff4-lsg2f\" (UID: \"b43c2576-b232-4853-a699-12c3c3af0886\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f" Jan 01 08:42:20 crc kubenswrapper[4867]: I0101 08:42:20.709140 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b43c2576-b232-4853-a699-12c3c3af0886-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-lsg2f\" (UID: \"b43c2576-b232-4853-a699-12c3c3af0886\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f" Jan 01 08:42:20 crc kubenswrapper[4867]: I0101 08:42:20.727572 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b43c2576-b232-4853-a699-12c3c3af0886-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-lsg2f\" (UID: \"b43c2576-b232-4853-a699-12c3c3af0886\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f" Jan 01 08:42:20 crc kubenswrapper[4867]: I0101 08:42:20.728247 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wckzc\" (UniqueName: \"kubernetes.io/projected/b43c2576-b232-4853-a699-12c3c3af0886-kube-api-access-wckzc\") pod \"cert-manager-cainjector-855d9ccff4-lsg2f\" (UID: \"b43c2576-b232-4853-a699-12c3c3af0886\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f" Jan 01 08:42:20 crc kubenswrapper[4867]: I0101 08:42:20.873021 4867 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f" Jan 01 08:42:21 crc kubenswrapper[4867]: I0101 08:42:21.079670 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f"] Jan 01 08:42:21 crc kubenswrapper[4867]: W0101 08:42:21.090285 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb43c2576_b232_4853_a699_12c3c3af0886.slice/crio-ad79f77f9ebbb38384dcbe11a0e25ca10fbf8fde333113b0f60a5a36903c4f8e WatchSource:0}: Error finding container ad79f77f9ebbb38384dcbe11a0e25ca10fbf8fde333113b0f60a5a36903c4f8e: Status 404 returned error can't find the container with id ad79f77f9ebbb38384dcbe11a0e25ca10fbf8fde333113b0f60a5a36903c4f8e Jan 01 08:42:21 crc kubenswrapper[4867]: I0101 08:42:21.318201 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f" event={"ID":"b43c2576-b232-4853-a699-12c3c3af0886","Type":"ContainerStarted","Data":"ad79f77f9ebbb38384dcbe11a0e25ca10fbf8fde333113b0f60a5a36903c4f8e"} Jan 01 08:42:21 crc kubenswrapper[4867]: I0101 08:42:21.330983 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:42:21 crc kubenswrapper[4867]: I0101 08:42:21.331037 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:42:26 crc kubenswrapper[4867]: I0101 08:42:26.352440 4867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f" event={"ID":"b43c2576-b232-4853-a699-12c3c3af0886","Type":"ContainerStarted","Data":"9671fb5bdfb44eab821196d22adc036456ac00d7c56393d414f5170922211e2c"} Jan 01 08:42:26 crc kubenswrapper[4867]: I0101 08:42:26.354609 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" event={"ID":"53107d29-98dc-4814-9829-38a2b5243e0d","Type":"ContainerStarted","Data":"e556a5717fe229a348626d454e29ab3e00697f34aab4718af9c1a2bd4ccfcf0b"} Jan 01 08:42:26 crc kubenswrapper[4867]: I0101 08:42:26.354945 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" Jan 01 08:42:26 crc kubenswrapper[4867]: I0101 08:42:26.374441 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-lsg2f" podStartSLOduration=1.662667487 podStartE2EDuration="6.37442252s" podCreationTimestamp="2026-01-01 08:42:20 +0000 UTC" firstStartedPulling="2026-01-01 08:42:21.093316512 +0000 UTC m=+950.228585291" lastFinishedPulling="2026-01-01 08:42:25.805071555 +0000 UTC m=+954.940340324" observedRunningTime="2026-01-01 08:42:26.369566055 +0000 UTC m=+955.504834844" watchObservedRunningTime="2026-01-01 08:42:26.37442252 +0000 UTC m=+955.509691289" Jan 01 08:42:26 crc kubenswrapper[4867]: I0101 08:42:26.392010 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" podStartSLOduration=1.432126671 podStartE2EDuration="8.391986708s" podCreationTimestamp="2026-01-01 08:42:18 +0000 UTC" firstStartedPulling="2026-01-01 08:42:18.878493323 +0000 UTC m=+948.013762092" lastFinishedPulling="2026-01-01 08:42:25.83835336 +0000 UTC m=+954.973622129" observedRunningTime="2026-01-01 08:42:26.386239198 +0000 UTC m=+955.521507997" watchObservedRunningTime="2026-01-01 08:42:26.391986708 
+0000 UTC m=+955.527255487" Jan 01 08:42:33 crc kubenswrapper[4867]: I0101 08:42:33.437086 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-s5bdk" Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.039208 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-xg99b"] Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.040661 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-xg99b"] Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.040770 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-xg99b" Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.043735 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-p7fsz" Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.203445 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8r22\" (UniqueName: \"kubernetes.io/projected/65f1bc8c-693f-4044-b161-26ba5eb03cea-kube-api-access-l8r22\") pod \"cert-manager-86cb77c54b-xg99b\" (UID: \"65f1bc8c-693f-4044-b161-26ba5eb03cea\") " pod="cert-manager/cert-manager-86cb77c54b-xg99b" Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.203545 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65f1bc8c-693f-4044-b161-26ba5eb03cea-bound-sa-token\") pod \"cert-manager-86cb77c54b-xg99b\" (UID: \"65f1bc8c-693f-4044-b161-26ba5eb03cea\") " pod="cert-manager/cert-manager-86cb77c54b-xg99b" Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.305840 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8r22\" (UniqueName: 
\"kubernetes.io/projected/65f1bc8c-693f-4044-b161-26ba5eb03cea-kube-api-access-l8r22\") pod \"cert-manager-86cb77c54b-xg99b\" (UID: \"65f1bc8c-693f-4044-b161-26ba5eb03cea\") " pod="cert-manager/cert-manager-86cb77c54b-xg99b" Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.306440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65f1bc8c-693f-4044-b161-26ba5eb03cea-bound-sa-token\") pod \"cert-manager-86cb77c54b-xg99b\" (UID: \"65f1bc8c-693f-4044-b161-26ba5eb03cea\") " pod="cert-manager/cert-manager-86cb77c54b-xg99b" Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.341646 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65f1bc8c-693f-4044-b161-26ba5eb03cea-bound-sa-token\") pod \"cert-manager-86cb77c54b-xg99b\" (UID: \"65f1bc8c-693f-4044-b161-26ba5eb03cea\") " pod="cert-manager/cert-manager-86cb77c54b-xg99b" Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.341858 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8r22\" (UniqueName: \"kubernetes.io/projected/65f1bc8c-693f-4044-b161-26ba5eb03cea-kube-api-access-l8r22\") pod \"cert-manager-86cb77c54b-xg99b\" (UID: \"65f1bc8c-693f-4044-b161-26ba5eb03cea\") " pod="cert-manager/cert-manager-86cb77c54b-xg99b" Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.382641 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-xg99b" Jan 01 08:42:37 crc kubenswrapper[4867]: I0101 08:42:37.852049 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-xg99b"] Jan 01 08:42:38 crc kubenswrapper[4867]: I0101 08:42:38.466277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-xg99b" event={"ID":"65f1bc8c-693f-4044-b161-26ba5eb03cea","Type":"ContainerStarted","Data":"8d5ff32f61374fcce8edaf40fd14b1c8c4556172bac95ff01840bcec74307927"} Jan 01 08:42:38 crc kubenswrapper[4867]: I0101 08:42:38.466679 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-xg99b" event={"ID":"65f1bc8c-693f-4044-b161-26ba5eb03cea","Type":"ContainerStarted","Data":"6d4c9844ad128720c93a32b2e3052965fb39a231ae57b58e4c337feb7a6edb4a"} Jan 01 08:42:38 crc kubenswrapper[4867]: I0101 08:42:38.488404 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-xg99b" podStartSLOduration=2.488386426 podStartE2EDuration="2.488386426s" podCreationTimestamp="2026-01-01 08:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:42:38.487467141 +0000 UTC m=+967.622735980" watchObservedRunningTime="2026-01-01 08:42:38.488386426 +0000 UTC m=+967.623655195" Jan 01 08:42:47 crc kubenswrapper[4867]: I0101 08:42:47.213978 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8dnck"] Jan 01 08:42:47 crc kubenswrapper[4867]: I0101 08:42:47.215732 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8dnck" Jan 01 08:42:47 crc kubenswrapper[4867]: I0101 08:42:47.218336 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5bpb2" Jan 01 08:42:47 crc kubenswrapper[4867]: I0101 08:42:47.218492 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 01 08:42:47 crc kubenswrapper[4867]: I0101 08:42:47.218618 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 01 08:42:47 crc kubenswrapper[4867]: I0101 08:42:47.249676 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8dnck"] Jan 01 08:42:47 crc kubenswrapper[4867]: I0101 08:42:47.360379 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x75fh\" (UniqueName: \"kubernetes.io/projected/ad13b7c6-c3d7-4373-96f9-f235e9b5fed8-kube-api-access-x75fh\") pod \"openstack-operator-index-8dnck\" (UID: \"ad13b7c6-c3d7-4373-96f9-f235e9b5fed8\") " pod="openstack-operators/openstack-operator-index-8dnck" Jan 01 08:42:47 crc kubenswrapper[4867]: I0101 08:42:47.462051 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x75fh\" (UniqueName: \"kubernetes.io/projected/ad13b7c6-c3d7-4373-96f9-f235e9b5fed8-kube-api-access-x75fh\") pod \"openstack-operator-index-8dnck\" (UID: \"ad13b7c6-c3d7-4373-96f9-f235e9b5fed8\") " pod="openstack-operators/openstack-operator-index-8dnck" Jan 01 08:42:47 crc kubenswrapper[4867]: I0101 08:42:47.488598 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x75fh\" (UniqueName: \"kubernetes.io/projected/ad13b7c6-c3d7-4373-96f9-f235e9b5fed8-kube-api-access-x75fh\") pod \"openstack-operator-index-8dnck\" (UID: 
\"ad13b7c6-c3d7-4373-96f9-f235e9b5fed8\") " pod="openstack-operators/openstack-operator-index-8dnck" Jan 01 08:42:47 crc kubenswrapper[4867]: I0101 08:42:47.554421 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8dnck" Jan 01 08:42:48 crc kubenswrapper[4867]: I0101 08:42:48.013851 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8dnck"] Jan 01 08:42:48 crc kubenswrapper[4867]: I0101 08:42:48.548144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8dnck" event={"ID":"ad13b7c6-c3d7-4373-96f9-f235e9b5fed8","Type":"ContainerStarted","Data":"dfbcc90c9f9d38baab921a4c2eeb2f94641f263545bad1764e7e6378d84902fc"} Jan 01 08:42:49 crc kubenswrapper[4867]: I0101 08:42:49.556362 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8dnck" event={"ID":"ad13b7c6-c3d7-4373-96f9-f235e9b5fed8","Type":"ContainerStarted","Data":"89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c"} Jan 01 08:42:49 crc kubenswrapper[4867]: I0101 08:42:49.586253 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8dnck" podStartSLOduration=1.616134733 podStartE2EDuration="2.58620878s" podCreationTimestamp="2026-01-01 08:42:47 +0000 UTC" firstStartedPulling="2026-01-01 08:42:48.028084303 +0000 UTC m=+977.163353102" lastFinishedPulling="2026-01-01 08:42:48.99815838 +0000 UTC m=+978.133427149" observedRunningTime="2026-01-01 08:42:49.580690035 +0000 UTC m=+978.715958854" watchObservedRunningTime="2026-01-01 08:42:49.58620878 +0000 UTC m=+978.721477559" Jan 01 08:42:50 crc kubenswrapper[4867]: I0101 08:42:50.563953 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8dnck"] Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.166071 
4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7657r"] Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.166808 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7657r" Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.180602 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7657r"] Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.331727 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.331843 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.331993 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.333241 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c0242f8cb2cb86cd3c1961752ae798238bc46747b9db37482dfc5091eb3d814"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.333413 4867 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://5c0242f8cb2cb86cd3c1961752ae798238bc46747b9db37482dfc5091eb3d814" gracePeriod=600 Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.334208 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kglw\" (UniqueName: \"kubernetes.io/projected/aeb647d1-966c-41c8-8ef3-7895ff67e463-kube-api-access-4kglw\") pod \"openstack-operator-index-7657r\" (UID: \"aeb647d1-966c-41c8-8ef3-7895ff67e463\") " pod="openstack-operators/openstack-operator-index-7657r" Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.436213 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kglw\" (UniqueName: \"kubernetes.io/projected/aeb647d1-966c-41c8-8ef3-7895ff67e463-kube-api-access-4kglw\") pod \"openstack-operator-index-7657r\" (UID: \"aeb647d1-966c-41c8-8ef3-7895ff67e463\") " pod="openstack-operators/openstack-operator-index-7657r" Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.475043 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kglw\" (UniqueName: \"kubernetes.io/projected/aeb647d1-966c-41c8-8ef3-7895ff67e463-kube-api-access-4kglw\") pod \"openstack-operator-index-7657r\" (UID: \"aeb647d1-966c-41c8-8ef3-7895ff67e463\") " pod="openstack-operators/openstack-operator-index-7657r" Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.491514 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7657r" Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.580778 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="5c0242f8cb2cb86cd3c1961752ae798238bc46747b9db37482dfc5091eb3d814" exitCode=0 Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.580935 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"5c0242f8cb2cb86cd3c1961752ae798238bc46747b9db37482dfc5091eb3d814"} Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.581311 4867 scope.go:117] "RemoveContainer" containerID="b489a809c46fea0670e0a497e09fb93b663297b7dde42c0a30153339b2adc104" Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.581408 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-8dnck" podUID="ad13b7c6-c3d7-4373-96f9-f235e9b5fed8" containerName="registry-server" containerID="cri-o://89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c" gracePeriod=2 Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 08:42:51.731011 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7657r"] Jan 01 08:42:51 crc kubenswrapper[4867]: W0101 08:42:51.734721 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeb647d1_966c_41c8_8ef3_7895ff67e463.slice/crio-4c266092de8ecef7444847b3216fbb290c03190669b78d88fd5a4f3fbeb37d95 WatchSource:0}: Error finding container 4c266092de8ecef7444847b3216fbb290c03190669b78d88fd5a4f3fbeb37d95: Status 404 returned error can't find the container with id 4c266092de8ecef7444847b3216fbb290c03190669b78d88fd5a4f3fbeb37d95 Jan 01 08:42:51 crc kubenswrapper[4867]: I0101 
08:42:51.890183 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8dnck" Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.042296 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x75fh\" (UniqueName: \"kubernetes.io/projected/ad13b7c6-c3d7-4373-96f9-f235e9b5fed8-kube-api-access-x75fh\") pod \"ad13b7c6-c3d7-4373-96f9-f235e9b5fed8\" (UID: \"ad13b7c6-c3d7-4373-96f9-f235e9b5fed8\") " Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.047301 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad13b7c6-c3d7-4373-96f9-f235e9b5fed8-kube-api-access-x75fh" (OuterVolumeSpecName: "kube-api-access-x75fh") pod "ad13b7c6-c3d7-4373-96f9-f235e9b5fed8" (UID: "ad13b7c6-c3d7-4373-96f9-f235e9b5fed8"). InnerVolumeSpecName "kube-api-access-x75fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.143848 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x75fh\" (UniqueName: \"kubernetes.io/projected/ad13b7c6-c3d7-4373-96f9-f235e9b5fed8-kube-api-access-x75fh\") on node \"crc\" DevicePath \"\"" Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.593224 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"81817d336fc213658d5e33bc8d0ea2842c8843cc5c0fbe3de4796b71ea1ba225"} Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.595628 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7657r" event={"ID":"aeb647d1-966c-41c8-8ef3-7895ff67e463","Type":"ContainerStarted","Data":"044d8eefdad6d7572e0e0a588430543f0abcc385db44b4821040b695a619946f"} Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.595653 
4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7657r" event={"ID":"aeb647d1-966c-41c8-8ef3-7895ff67e463","Type":"ContainerStarted","Data":"4c266092de8ecef7444847b3216fbb290c03190669b78d88fd5a4f3fbeb37d95"} Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.598344 4867 generic.go:334] "Generic (PLEG): container finished" podID="ad13b7c6-c3d7-4373-96f9-f235e9b5fed8" containerID="89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c" exitCode=0 Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.598384 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8dnck" event={"ID":"ad13b7c6-c3d7-4373-96f9-f235e9b5fed8","Type":"ContainerDied","Data":"89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c"} Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.598382 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8dnck" Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.598418 4867 scope.go:117] "RemoveContainer" containerID="89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c" Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.598407 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8dnck" event={"ID":"ad13b7c6-c3d7-4373-96f9-f235e9b5fed8","Type":"ContainerDied","Data":"dfbcc90c9f9d38baab921a4c2eeb2f94641f263545bad1764e7e6378d84902fc"} Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.635005 4867 scope.go:117] "RemoveContainer" containerID="89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c" Jan 01 08:42:52 crc kubenswrapper[4867]: E0101 08:42:52.635503 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c\": container 
with ID starting with 89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c not found: ID does not exist" containerID="89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c" Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.635545 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c"} err="failed to get container status \"89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c\": rpc error: code = NotFound desc = could not find container \"89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c\": container with ID starting with 89158690bb640d0606fcdd978c6f9a4a5c52ec1c5c4873d53a444ab63523747c not found: ID does not exist" Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.644681 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7657r" podStartSLOduration=1.221953457 podStartE2EDuration="1.644651334s" podCreationTimestamp="2026-01-01 08:42:51 +0000 UTC" firstStartedPulling="2026-01-01 08:42:51.742428547 +0000 UTC m=+980.877697356" lastFinishedPulling="2026-01-01 08:42:52.165126464 +0000 UTC m=+981.300395233" observedRunningTime="2026-01-01 08:42:52.631395623 +0000 UTC m=+981.766664412" watchObservedRunningTime="2026-01-01 08:42:52.644651334 +0000 UTC m=+981.779920143" Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.662181 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8dnck"] Jan 01 08:42:52 crc kubenswrapper[4867]: I0101 08:42:52.672127 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-8dnck"] Jan 01 08:42:53 crc kubenswrapper[4867]: I0101 08:42:53.147690 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad13b7c6-c3d7-4373-96f9-f235e9b5fed8" 
path="/var/lib/kubelet/pods/ad13b7c6-c3d7-4373-96f9-f235e9b5fed8/volumes" Jan 01 08:43:01 crc kubenswrapper[4867]: I0101 08:43:01.492833 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7657r" Jan 01 08:43:01 crc kubenswrapper[4867]: I0101 08:43:01.496167 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7657r" Jan 01 08:43:01 crc kubenswrapper[4867]: I0101 08:43:01.530972 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7657r" Jan 01 08:43:01 crc kubenswrapper[4867]: I0101 08:43:01.718806 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7657r" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.235929 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph"] Jan 01 08:43:03 crc kubenswrapper[4867]: E0101 08:43:03.236170 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad13b7c6-c3d7-4373-96f9-f235e9b5fed8" containerName="registry-server" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.236182 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad13b7c6-c3d7-4373-96f9-f235e9b5fed8" containerName="registry-server" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.236279 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad13b7c6-c3d7-4373-96f9-f235e9b5fed8" containerName="registry-server" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.237844 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.243226 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mrtjj" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.251641 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph"] Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.361527 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-util\") pod \"de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.362371 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6std\" (UniqueName: \"kubernetes.io/projected/2aeaad03-1c14-49f8-b417-5e4d6470db87-kube-api-access-x6std\") pod \"de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.362625 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-bundle\") pod \"de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 
08:43:03.464027 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-bundle\") pod \"de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.464148 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-util\") pod \"de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.464234 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6std\" (UniqueName: \"kubernetes.io/projected/2aeaad03-1c14-49f8-b417-5e4d6470db87-kube-api-access-x6std\") pod \"de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.465216 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-bundle\") pod \"de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.465305 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-util\") pod \"de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.501656 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6std\" (UniqueName: \"kubernetes.io/projected/2aeaad03-1c14-49f8-b417-5e4d6470db87-kube-api-access-x6std\") pod \"de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.565997 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:03 crc kubenswrapper[4867]: I0101 08:43:03.853561 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph"] Jan 01 08:43:04 crc kubenswrapper[4867]: I0101 08:43:04.714853 4867 generic.go:334] "Generic (PLEG): container finished" podID="2aeaad03-1c14-49f8-b417-5e4d6470db87" containerID="37d20259d07019b2eda2d1aed50e4a8864dbbb40f48f158a0c4a652be76dcfdf" exitCode=0 Jan 01 08:43:04 crc kubenswrapper[4867]: I0101 08:43:04.714954 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" event={"ID":"2aeaad03-1c14-49f8-b417-5e4d6470db87","Type":"ContainerDied","Data":"37d20259d07019b2eda2d1aed50e4a8864dbbb40f48f158a0c4a652be76dcfdf"} Jan 01 08:43:04 crc kubenswrapper[4867]: I0101 08:43:04.715290 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" event={"ID":"2aeaad03-1c14-49f8-b417-5e4d6470db87","Type":"ContainerStarted","Data":"68e79e662bbc7230068a298155a11ce85e399a61f4ff2bde1f1ce8339750fc04"} Jan 01 08:43:05 crc kubenswrapper[4867]: I0101 08:43:05.726830 4867 generic.go:334] "Generic (PLEG): container finished" podID="2aeaad03-1c14-49f8-b417-5e4d6470db87" containerID="398e33e75246e4e932d555b78cab15230cdeefd5fc9b89d200ca56108fabd660" exitCode=0 Jan 01 08:43:05 crc kubenswrapper[4867]: I0101 08:43:05.726905 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" event={"ID":"2aeaad03-1c14-49f8-b417-5e4d6470db87","Type":"ContainerDied","Data":"398e33e75246e4e932d555b78cab15230cdeefd5fc9b89d200ca56108fabd660"} Jan 01 08:43:06 crc kubenswrapper[4867]: I0101 08:43:06.743360 4867 generic.go:334] "Generic (PLEG): container finished" podID="2aeaad03-1c14-49f8-b417-5e4d6470db87" containerID="15cc8320440ba83e89dcc5eb77cf8936c99ab730b46b473b6430b0ee96800422" exitCode=0 Jan 01 08:43:06 crc kubenswrapper[4867]: I0101 08:43:06.743491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" event={"ID":"2aeaad03-1c14-49f8-b417-5e4d6470db87","Type":"ContainerDied","Data":"15cc8320440ba83e89dcc5eb77cf8936c99ab730b46b473b6430b0ee96800422"} Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.128584 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.235251 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6std\" (UniqueName: \"kubernetes.io/projected/2aeaad03-1c14-49f8-b417-5e4d6470db87-kube-api-access-x6std\") pod \"2aeaad03-1c14-49f8-b417-5e4d6470db87\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.235439 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-bundle\") pod \"2aeaad03-1c14-49f8-b417-5e4d6470db87\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.235650 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-util\") pod \"2aeaad03-1c14-49f8-b417-5e4d6470db87\" (UID: \"2aeaad03-1c14-49f8-b417-5e4d6470db87\") " Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.236381 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-bundle" (OuterVolumeSpecName: "bundle") pod "2aeaad03-1c14-49f8-b417-5e4d6470db87" (UID: "2aeaad03-1c14-49f8-b417-5e4d6470db87"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.237389 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.245530 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aeaad03-1c14-49f8-b417-5e4d6470db87-kube-api-access-x6std" (OuterVolumeSpecName: "kube-api-access-x6std") pod "2aeaad03-1c14-49f8-b417-5e4d6470db87" (UID: "2aeaad03-1c14-49f8-b417-5e4d6470db87"). InnerVolumeSpecName "kube-api-access-x6std". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.268370 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-util" (OuterVolumeSpecName: "util") pod "2aeaad03-1c14-49f8-b417-5e4d6470db87" (UID: "2aeaad03-1c14-49f8-b417-5e4d6470db87"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.338760 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2aeaad03-1c14-49f8-b417-5e4d6470db87-util\") on node \"crc\" DevicePath \"\"" Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.338811 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6std\" (UniqueName: \"kubernetes.io/projected/2aeaad03-1c14-49f8-b417-5e4d6470db87-kube-api-access-x6std\") on node \"crc\" DevicePath \"\"" Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.768206 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" event={"ID":"2aeaad03-1c14-49f8-b417-5e4d6470db87","Type":"ContainerDied","Data":"68e79e662bbc7230068a298155a11ce85e399a61f4ff2bde1f1ce8339750fc04"} Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.768265 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68e79e662bbc7230068a298155a11ce85e399a61f4ff2bde1f1ce8339750fc04" Jan 01 08:43:08 crc kubenswrapper[4867]: I0101 08:43:08.768409 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph" Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.187014 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6"] Jan 01 08:43:16 crc kubenswrapper[4867]: E0101 08:43:16.187710 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aeaad03-1c14-49f8-b417-5e4d6470db87" containerName="util" Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.187722 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aeaad03-1c14-49f8-b417-5e4d6470db87" containerName="util" Jan 01 08:43:16 crc kubenswrapper[4867]: E0101 08:43:16.187736 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aeaad03-1c14-49f8-b417-5e4d6470db87" containerName="pull" Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.187742 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aeaad03-1c14-49f8-b417-5e4d6470db87" containerName="pull" Jan 01 08:43:16 crc kubenswrapper[4867]: E0101 08:43:16.187751 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aeaad03-1c14-49f8-b417-5e4d6470db87" containerName="extract" Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.187757 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aeaad03-1c14-49f8-b417-5e4d6470db87" containerName="extract" Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.187865 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aeaad03-1c14-49f8-b417-5e4d6470db87" containerName="extract" Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.188243 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6" Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.190067 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-sl8lt" Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.210073 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6"] Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.246118 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndsvm\" (UniqueName: \"kubernetes.io/projected/ad4e0e83-de60-433b-a688-7e1bf4bd2c76-kube-api-access-ndsvm\") pod \"openstack-operator-controller-operator-6879547b79-cctq6\" (UID: \"ad4e0e83-de60-433b-a688-7e1bf4bd2c76\") " pod="openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6" Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.347902 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndsvm\" (UniqueName: \"kubernetes.io/projected/ad4e0e83-de60-433b-a688-7e1bf4bd2c76-kube-api-access-ndsvm\") pod \"openstack-operator-controller-operator-6879547b79-cctq6\" (UID: \"ad4e0e83-de60-433b-a688-7e1bf4bd2c76\") " pod="openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6" Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.366490 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndsvm\" (UniqueName: \"kubernetes.io/projected/ad4e0e83-de60-433b-a688-7e1bf4bd2c76-kube-api-access-ndsvm\") pod \"openstack-operator-controller-operator-6879547b79-cctq6\" (UID: \"ad4e0e83-de60-433b-a688-7e1bf4bd2c76\") " pod="openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6" Jan 01 08:43:16 crc kubenswrapper[4867]: I0101 08:43:16.506122 4867 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6" Jan 01 08:43:17 crc kubenswrapper[4867]: I0101 08:43:17.076642 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6"] Jan 01 08:43:17 crc kubenswrapper[4867]: I0101 08:43:17.847214 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6" event={"ID":"ad4e0e83-de60-433b-a688-7e1bf4bd2c76","Type":"ContainerStarted","Data":"c6d7057031993e6c90fcadb4092db920ae5a2866cae1a35c34745c33acddb70a"} Jan 01 08:43:21 crc kubenswrapper[4867]: I0101 08:43:21.877472 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6" event={"ID":"ad4e0e83-de60-433b-a688-7e1bf4bd2c76","Type":"ContainerStarted","Data":"e9c1a8d9404a974fff95db73d4d1913f9e4e98d517dd670e5ca0e0e35f7396f8"} Jan 01 08:43:21 crc kubenswrapper[4867]: I0101 08:43:21.878047 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6" Jan 01 08:43:21 crc kubenswrapper[4867]: I0101 08:43:21.907254 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6" podStartSLOduration=1.6348441569999999 podStartE2EDuration="5.907237049s" podCreationTimestamp="2026-01-01 08:43:16 +0000 UTC" firstStartedPulling="2026-01-01 08:43:17.082071416 +0000 UTC m=+1006.217340175" lastFinishedPulling="2026-01-01 08:43:21.354464258 +0000 UTC m=+1010.489733067" observedRunningTime="2026-01-01 08:43:21.902679762 +0000 UTC m=+1011.037948551" watchObservedRunningTime="2026-01-01 08:43:21.907237049 +0000 UTC m=+1011.042505818" Jan 01 08:43:26 crc kubenswrapper[4867]: I0101 
08:43:26.511162 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6879547b79-cctq6" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.471646 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.473325 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.480541 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vtlbv" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.488862 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.604724 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.605525 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.609065 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lsrlt" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.610229 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzjl2\" (UniqueName: \"kubernetes.io/projected/80077b2f-5e6e-49f7-9d98-8c1004ab2cd4-kube-api-access-wzjl2\") pod \"barbican-operator-controller-manager-f6f74d6db-k6qkg\" (UID: \"80077b2f-5e6e-49f7-9d98-8c1004ab2cd4\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.619581 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.626011 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.626797 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.628741 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-84p2n" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.631635 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.640325 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.641184 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.644660 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.645056 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-28wqj" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.651510 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.652428 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.655632 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-bbfr4" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.658191 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.663821 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.664648 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.668072 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-5bcqt" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.673942 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.681967 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.682712 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.687253 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-z2pxt" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.696658 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.697542 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.700805 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tz27h" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.700977 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.710799 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-njm56"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.711540 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.712071 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz55b\" (UniqueName: \"kubernetes.io/projected/89e16415-08c2-45fe-8a85-b1f12d047cde-kube-api-access-pz55b\") pod \"cinder-operator-controller-manager-78979fc445-mxm97\" (UID: \"89e16415-08c2-45fe-8a85-b1f12d047cde\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.712136 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnvsv\" (UniqueName: \"kubernetes.io/projected/795d3985-4592-42e4-aa83-aaebb35bcc6d-kube-api-access-hnvsv\") pod \"designate-operator-controller-manager-66f8b87655-vp4t2\" (UID: \"795d3985-4592-42e4-aa83-aaebb35bcc6d\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.712161 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzjl2\" (UniqueName: \"kubernetes.io/projected/80077b2f-5e6e-49f7-9d98-8c1004ab2cd4-kube-api-access-wzjl2\") pod \"barbican-operator-controller-manager-f6f74d6db-k6qkg\" (UID: \"80077b2f-5e6e-49f7-9d98-8c1004ab2cd4\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.717328 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-m8bgb" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.718940 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.719697 4867 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.730796 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.738932 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.739175 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-zl2d9" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.745009 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.785516 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.786867 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.788751 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dpcld" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.799521 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzjl2\" (UniqueName: \"kubernetes.io/projected/80077b2f-5e6e-49f7-9d98-8c1004ab2cd4-kube-api-access-wzjl2\") pod \"barbican-operator-controller-manager-f6f74d6db-k6qkg\" (UID: \"80077b2f-5e6e-49f7-9d98-8c1004ab2cd4\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.813552 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxmz6\" (UniqueName: \"kubernetes.io/projected/dbc4e740-aa10-4b7b-88db-7c172dae38f9-kube-api-access-zxmz6\") pod \"ironic-operator-controller-manager-f99f54bc8-5jhrr\" (UID: \"dbc4e740-aa10-4b7b-88db-7c172dae38f9\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.814027 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnvsv\" (UniqueName: \"kubernetes.io/projected/795d3985-4592-42e4-aa83-aaebb35bcc6d-kube-api-access-hnvsv\") pod \"designate-operator-controller-manager-66f8b87655-vp4t2\" (UID: \"795d3985-4592-42e4-aa83-aaebb35bcc6d\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.814117 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w267n\" (UniqueName: \"kubernetes.io/projected/3b2fcdd1-2278-4f3e-b3aa-570321fafee8-kube-api-access-w267n\") pod 
\"heat-operator-controller-manager-658dd65b86-hn42f\" (UID: \"3b2fcdd1-2278-4f3e-b3aa-570321fafee8\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.814282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc4lg\" (UniqueName: \"kubernetes.io/projected/7cee2279-7f63-4416-bf5f-42e1ca8bd334-kube-api-access-jc4lg\") pod \"glance-operator-controller-manager-7b549fc966-8h6dx\" (UID: \"7cee2279-7f63-4416-bf5f-42e1ca8bd334\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.814380 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmfp\" (UniqueName: \"kubernetes.io/projected/a8217738-2a7f-41ee-9d06-a329d7c8dbfc-kube-api-access-8dmfp\") pod \"manila-operator-controller-manager-598945d5b8-gbrnj\" (UID: \"a8217738-2a7f-41ee-9d06-a329d7c8dbfc\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.814487 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz55b\" (UniqueName: \"kubernetes.io/projected/89e16415-08c2-45fe-8a85-b1f12d047cde-kube-api-access-pz55b\") pod \"cinder-operator-controller-manager-78979fc445-mxm97\" (UID: \"89e16415-08c2-45fe-8a85-b1f12d047cde\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.814661 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert\") pod \"infra-operator-controller-manager-6d99759cf-vcnt9\" (UID: \"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf\") " 
pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.814747 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkzdf\" (UniqueName: \"kubernetes.io/projected/bbd5645a-1a67-44ef-8aa2-25fa40566538-kube-api-access-wkzdf\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-mvsxj\" (UID: \"bbd5645a-1a67-44ef-8aa2-25fa40566538\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.814830 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzfbs\" (UniqueName: \"kubernetes.io/projected/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-kube-api-access-fzfbs\") pod \"infra-operator-controller-manager-6d99759cf-vcnt9\" (UID: \"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.814926 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj7vm\" (UniqueName: \"kubernetes.io/projected/44a94a32-c18a-4e1a-8b8a-461a002ab55c-kube-api-access-zj7vm\") pod \"keystone-operator-controller-manager-568985c78-njm56\" (UID: \"44a94a32-c18a-4e1a-8b8a-461a002ab55c\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.822838 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.826715 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.826830 4867 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.832575 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-vfccz" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.833632 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-njm56"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.838528 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnvsv\" (UniqueName: \"kubernetes.io/projected/795d3985-4592-42e4-aa83-aaebb35bcc6d-kube-api-access-hnvsv\") pod \"designate-operator-controller-manager-66f8b87655-vp4t2\" (UID: \"795d3985-4592-42e4-aa83-aaebb35bcc6d\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.838694 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz55b\" (UniqueName: \"kubernetes.io/projected/89e16415-08c2-45fe-8a85-b1f12d047cde-kube-api-access-pz55b\") pod \"cinder-operator-controller-manager-78979fc445-mxm97\" (UID: \"89e16415-08c2-45fe-8a85-b1f12d047cde\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.872182 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.873062 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.875415 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vcjkr" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.878927 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.882935 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.889987 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.892006 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.892451 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mdkcc" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.896266 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.907221 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.908042 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.910206 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.910432 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jzbzx" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.912926 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.915464 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxmz6\" (UniqueName: \"kubernetes.io/projected/dbc4e740-aa10-4b7b-88db-7c172dae38f9-kube-api-access-zxmz6\") pod \"ironic-operator-controller-manager-f99f54bc8-5jhrr\" (UID: \"dbc4e740-aa10-4b7b-88db-7c172dae38f9\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.915491 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w267n\" (UniqueName: \"kubernetes.io/projected/3b2fcdd1-2278-4f3e-b3aa-570321fafee8-kube-api-access-w267n\") pod \"heat-operator-controller-manager-658dd65b86-hn42f\" (UID: \"3b2fcdd1-2278-4f3e-b3aa-570321fafee8\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.915522 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc4lg\" (UniqueName: \"kubernetes.io/projected/7cee2279-7f63-4416-bf5f-42e1ca8bd334-kube-api-access-jc4lg\") pod \"glance-operator-controller-manager-7b549fc966-8h6dx\" 
(UID: \"7cee2279-7f63-4416-bf5f-42e1ca8bd334\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.915548 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmfp\" (UniqueName: \"kubernetes.io/projected/a8217738-2a7f-41ee-9d06-a329d7c8dbfc-kube-api-access-8dmfp\") pod \"manila-operator-controller-manager-598945d5b8-gbrnj\" (UID: \"a8217738-2a7f-41ee-9d06-a329d7c8dbfc\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.915572 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert\") pod \"infra-operator-controller-manager-6d99759cf-vcnt9\" (UID: \"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.915591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkzdf\" (UniqueName: \"kubernetes.io/projected/bbd5645a-1a67-44ef-8aa2-25fa40566538-kube-api-access-wkzdf\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-mvsxj\" (UID: \"bbd5645a-1a67-44ef-8aa2-25fa40566538\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.915612 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzfbs\" (UniqueName: \"kubernetes.io/projected/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-kube-api-access-fzfbs\") pod \"infra-operator-controller-manager-6d99759cf-vcnt9\" (UID: \"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.915631 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj7vm\" (UniqueName: \"kubernetes.io/projected/44a94a32-c18a-4e1a-8b8a-461a002ab55c-kube-api-access-zj7vm\") pod \"keystone-operator-controller-manager-568985c78-njm56\" (UID: \"44a94a32-c18a-4e1a-8b8a-461a002ab55c\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.915661 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-989nf\" (UniqueName: \"kubernetes.io/projected/06f61537-0f61-41ee-a049-10540e971c9d-kube-api-access-989nf\") pod \"mariadb-operator-controller-manager-7b88bfc995-svlz4\" (UID: \"06f61537-0f61-41ee-a049-10540e971c9d\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.915694 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltjcb\" (UniqueName: \"kubernetes.io/projected/2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0-kube-api-access-ltjcb\") pod \"neutron-operator-controller-manager-7cd87b778f-m4l4l\" (UID: \"2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" Jan 01 08:43:45 crc kubenswrapper[4867]: E0101 08:43:45.916270 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 01 08:43:45 crc kubenswrapper[4867]: E0101 08:43:45.916313 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert podName:8e2aeeaf-c653-49dc-9165-fc5445bb7aaf nodeName:}" failed. No retries permitted until 2026-01-01 08:43:46.416298345 +0000 UTC m=+1035.551567114 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert") pod "infra-operator-controller-manager-6d99759cf-vcnt9" (UID: "8e2aeeaf-c653-49dc-9165-fc5445bb7aaf") : secret "infra-operator-webhook-server-cert" not found Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.916731 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.917784 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.919490 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.925349 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-wg5nh" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.925536 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.927263 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.935416 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.936207 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.939548 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.944406 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.945111 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4dzmb" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.945281 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.946741 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmfp\" (UniqueName: \"kubernetes.io/projected/a8217738-2a7f-41ee-9d06-a329d7c8dbfc-kube-api-access-8dmfp\") pod \"manila-operator-controller-manager-598945d5b8-gbrnj\" (UID: \"a8217738-2a7f-41ee-9d06-a329d7c8dbfc\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.947526 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.951484 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-djw9v" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.953341 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.955736 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w267n\" (UniqueName: \"kubernetes.io/projected/3b2fcdd1-2278-4f3e-b3aa-570321fafee8-kube-api-access-w267n\") pod \"heat-operator-controller-manager-658dd65b86-hn42f\" (UID: \"3b2fcdd1-2278-4f3e-b3aa-570321fafee8\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.959454 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxmz6\" (UniqueName: \"kubernetes.io/projected/dbc4e740-aa10-4b7b-88db-7c172dae38f9-kube-api-access-zxmz6\") pod \"ironic-operator-controller-manager-f99f54bc8-5jhrr\" (UID: \"dbc4e740-aa10-4b7b-88db-7c172dae38f9\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.963662 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc4lg\" (UniqueName: \"kubernetes.io/projected/7cee2279-7f63-4416-bf5f-42e1ca8bd334-kube-api-access-jc4lg\") pod \"glance-operator-controller-manager-7b549fc966-8h6dx\" (UID: \"7cee2279-7f63-4416-bf5f-42e1ca8bd334\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.965987 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.967029 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj7vm\" (UniqueName: \"kubernetes.io/projected/44a94a32-c18a-4e1a-8b8a-461a002ab55c-kube-api-access-zj7vm\") pod \"keystone-operator-controller-manager-568985c78-njm56\" (UID: \"44a94a32-c18a-4e1a-8b8a-461a002ab55c\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.976926 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzfbs\" (UniqueName: \"kubernetes.io/projected/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-kube-api-access-fzfbs\") pod \"infra-operator-controller-manager-6d99759cf-vcnt9\" (UID: \"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.982115 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.983283 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.986666 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6"] Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.989345 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-x58x5" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.995258 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkzdf\" (UniqueName: \"kubernetes.io/projected/bbd5645a-1a67-44ef-8aa2-25fa40566538-kube-api-access-wkzdf\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-mvsxj\" (UID: \"bbd5645a-1a67-44ef-8aa2-25fa40566538\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj" Jan 01 08:43:45 crc kubenswrapper[4867]: I0101 08:43:45.995541 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.002296 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.007339 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.017041 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd9cw\" (UniqueName: \"kubernetes.io/projected/2e2a7fff-5652-4f64-9660-59b811de1346-kube-api-access-qd9cw\") pod \"swift-operator-controller-manager-bb586bbf4-tkjs4\" (UID: \"2e2a7fff-5652-4f64-9660-59b811de1346\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.017132 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv927\" (UniqueName: \"kubernetes.io/projected/6834a8b5-22a8-4a7c-b03f-633599137bd2-kube-api-access-pv927\") pod \"ovn-operator-controller-manager-bf6d4f946-4x7b9\" (UID: \"6834a8b5-22a8-4a7c-b03f-633599137bd2\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.017169 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shwn4\" (UniqueName: \"kubernetes.io/projected/26964f50-c878-4612-b298-634abc246f6a-kube-api-access-shwn4\") pod \"placement-operator-controller-manager-9b6f8f78c-2tl66\" (UID: \"26964f50-c878-4612-b298-634abc246f6a\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.017197 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 
08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.017247 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cspcv\" (UniqueName: \"kubernetes.io/projected/59732ee6-32d1-48be-9e3f-a9989be15bbc-kube-api-access-cspcv\") pod \"nova-operator-controller-manager-5fbbf8b6cc-59zpg\" (UID: \"59732ee6-32d1-48be-9e3f-a9989be15bbc\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.017368 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-989nf\" (UniqueName: \"kubernetes.io/projected/06f61537-0f61-41ee-a049-10540e971c9d-kube-api-access-989nf\") pod \"mariadb-operator-controller-manager-7b88bfc995-svlz4\" (UID: \"06f61537-0f61-41ee-a049-10540e971c9d\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.017478 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltjcb\" (UniqueName: \"kubernetes.io/projected/2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0-kube-api-access-ltjcb\") pod \"neutron-operator-controller-manager-7cd87b778f-m4l4l\" (UID: \"2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.017585 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7mv5\" (UniqueName: \"kubernetes.io/projected/cd6e1e20-2735-40b9-a1c2-313e2845ffc8-kube-api-access-p7mv5\") pod \"octavia-operator-controller-manager-68c649d9d-276zz\" (UID: \"cd6e1e20-2735-40b9-a1c2-313e2845ffc8\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.017640 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5kll\" (UniqueName: \"kubernetes.io/projected/d721206b-841d-4c5c-9d94-202fff6b8838-kube-api-access-z5kll\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.020765 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.021573 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.024327 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-5kvg8" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.060831 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-989nf\" (UniqueName: \"kubernetes.io/projected/06f61537-0f61-41ee-a049-10540e971c9d-kube-api-access-989nf\") pod \"mariadb-operator-controller-manager-7b88bfc995-svlz4\" (UID: \"06f61537-0f61-41ee-a049-10540e971c9d\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.066946 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.083116 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltjcb\" (UniqueName: \"kubernetes.io/projected/2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0-kube-api-access-ltjcb\") pod \"neutron-operator-controller-manager-7cd87b778f-m4l4l\" (UID: 
\"2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.086596 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.087512 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.124492 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd9cw\" (UniqueName: \"kubernetes.io/projected/2e2a7fff-5652-4f64-9660-59b811de1346-kube-api-access-qd9cw\") pod \"swift-operator-controller-manager-bb586bbf4-tkjs4\" (UID: \"2e2a7fff-5652-4f64-9660-59b811de1346\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.124773 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv927\" (UniqueName: \"kubernetes.io/projected/6834a8b5-22a8-4a7c-b03f-633599137bd2-kube-api-access-pv927\") pod \"ovn-operator-controller-manager-bf6d4f946-4x7b9\" (UID: \"6834a8b5-22a8-4a7c-b03f-633599137bd2\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.124801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shwn4\" (UniqueName: \"kubernetes.io/projected/26964f50-c878-4612-b298-634abc246f6a-kube-api-access-shwn4\") pod \"placement-operator-controller-manager-9b6f8f78c-2tl66\" (UID: \"26964f50-c878-4612-b298-634abc246f6a\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.124841 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.124900 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cspcv\" (UniqueName: \"kubernetes.io/projected/59732ee6-32d1-48be-9e3f-a9989be15bbc-kube-api-access-cspcv\") pod \"nova-operator-controller-manager-5fbbf8b6cc-59zpg\" (UID: \"59732ee6-32d1-48be-9e3f-a9989be15bbc\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.124928 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.124940 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wmxp\" (UniqueName: \"kubernetes.io/projected/96cea6bb-e017-4e6c-9298-aaf07b775dff-kube-api-access-4wmxp\") pod \"test-operator-controller-manager-6c866cfdcb-7h2r7\" (UID: \"96cea6bb-e017-4e6c-9298-aaf07b775dff\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.124960 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2kd5\" (UniqueName: \"kubernetes.io/projected/8801d693-c818-4666-bda5-93d9db1d46a0-kube-api-access-c2kd5\") pod \"telemetry-operator-controller-manager-68d988df55-q2rv6\" (UID: \"8801d693-c818-4666-bda5-93d9db1d46a0\") " 
pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6" Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.124979 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert podName:d721206b-841d-4c5c-9d94-202fff6b8838 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:46.624962799 +0000 UTC m=+1035.760231568 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert") pod "openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" (UID: "d721206b-841d-4c5c-9d94-202fff6b8838") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.125094 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7mv5\" (UniqueName: \"kubernetes.io/projected/cd6e1e20-2735-40b9-a1c2-313e2845ffc8-kube-api-access-p7mv5\") pod \"octavia-operator-controller-manager-68c649d9d-276zz\" (UID: \"cd6e1e20-2735-40b9-a1c2-313e2845ffc8\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.125157 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5kll\" (UniqueName: \"kubernetes.io/projected/d721206b-841d-4c5c-9d94-202fff6b8838-kube-api-access-z5kll\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.131939 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.165972 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7mv5\" (UniqueName: \"kubernetes.io/projected/cd6e1e20-2735-40b9-a1c2-313e2845ffc8-kube-api-access-p7mv5\") pod \"octavia-operator-controller-manager-68c649d9d-276zz\" (UID: \"cd6e1e20-2735-40b9-a1c2-313e2845ffc8\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.166623 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv927\" (UniqueName: \"kubernetes.io/projected/6834a8b5-22a8-4a7c-b03f-633599137bd2-kube-api-access-pv927\") pod \"ovn-operator-controller-manager-bf6d4f946-4x7b9\" (UID: \"6834a8b5-22a8-4a7c-b03f-633599137bd2\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.169502 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shwn4\" (UniqueName: \"kubernetes.io/projected/26964f50-c878-4612-b298-634abc246f6a-kube-api-access-shwn4\") pod \"placement-operator-controller-manager-9b6f8f78c-2tl66\" (UID: \"26964f50-c878-4612-b298-634abc246f6a\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.172092 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.179742 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5kll\" (UniqueName: \"kubernetes.io/projected/d721206b-841d-4c5c-9d94-202fff6b8838-kube-api-access-z5kll\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.185750 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd9cw\" (UniqueName: \"kubernetes.io/projected/2e2a7fff-5652-4f64-9660-59b811de1346-kube-api-access-qd9cw\") pod \"swift-operator-controller-manager-bb586bbf4-tkjs4\" (UID: \"2e2a7fff-5652-4f64-9660-59b811de1346\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.186545 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cspcv\" (UniqueName: \"kubernetes.io/projected/59732ee6-32d1-48be-9e3f-a9989be15bbc-kube-api-access-cspcv\") pod \"nova-operator-controller-manager-5fbbf8b6cc-59zpg\" (UID: \"59732ee6-32d1-48be-9e3f-a9989be15bbc\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.189915 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.190626 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.190794 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.194916 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-nbskg" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.204525 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.206025 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.224018 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.227743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wmxp\" (UniqueName: \"kubernetes.io/projected/96cea6bb-e017-4e6c-9298-aaf07b775dff-kube-api-access-4wmxp\") pod \"test-operator-controller-manager-6c866cfdcb-7h2r7\" (UID: \"96cea6bb-e017-4e6c-9298-aaf07b775dff\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.227779 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2kd5\" (UniqueName: \"kubernetes.io/projected/8801d693-c818-4666-bda5-93d9db1d46a0-kube-api-access-c2kd5\") pod \"telemetry-operator-controller-manager-68d988df55-q2rv6\" (UID: \"8801d693-c818-4666-bda5-93d9db1d46a0\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.239696 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.247358 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.250010 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.251629 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.253695 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.253711 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2kd5\" (UniqueName: \"kubernetes.io/projected/8801d693-c818-4666-bda5-93d9db1d46a0-kube-api-access-c2kd5\") pod \"telemetry-operator-controller-manager-68d988df55-q2rv6\" (UID: \"8801d693-c818-4666-bda5-93d9db1d46a0\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.253743 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wmxp\" (UniqueName: \"kubernetes.io/projected/96cea6bb-e017-4e6c-9298-aaf07b775dff-kube-api-access-4wmxp\") pod \"test-operator-controller-manager-6c866cfdcb-7h2r7\" (UID: \"96cea6bb-e017-4e6c-9298-aaf07b775dff\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.254003 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 01 
08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.254059 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-snfrd" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.267403 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.273617 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.290410 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.296069 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.297287 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.299063 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bhm7z" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.304432 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.329073 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt5m6\" (UniqueName: \"kubernetes.io/projected/1e5669c2-43cd-4d20-9d76-67e4dee53753-kube-api-access-nt5m6\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.329150 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.329202 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.329226 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxkhb\" (UniqueName: \"kubernetes.io/projected/57ffbe9b-99b1-433d-86fa-e61435d99318-kube-api-access-gxkhb\") pod \"watcher-operator-controller-manager-9dbdf6486-jcqkc\" (UID: \"57ffbe9b-99b1-433d-86fa-e61435d99318\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.346903 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.429866 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt5m6\" (UniqueName: \"kubernetes.io/projected/1e5669c2-43cd-4d20-9d76-67e4dee53753-kube-api-access-nt5m6\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.429929 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22vtj\" (UniqueName: \"kubernetes.io/projected/065f64f0-26e4-4b68-8dfa-1bf17f20e99b-kube-api-access-22vtj\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mvr5b\" (UID: \"065f64f0-26e4-4b68-8dfa-1bf17f20e99b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.429966 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 
01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.430021 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.430045 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxkhb\" (UniqueName: \"kubernetes.io/projected/57ffbe9b-99b1-433d-86fa-e61435d99318-kube-api-access-gxkhb\") pod \"watcher-operator-controller-manager-9dbdf6486-jcqkc\" (UID: \"57ffbe9b-99b1-433d-86fa-e61435d99318\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.430098 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert\") pod \"infra-operator-controller-manager-6d99759cf-vcnt9\" (UID: \"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.430211 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.430263 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert podName:8e2aeeaf-c653-49dc-9165-fc5445bb7aaf nodeName:}" failed. No retries permitted until 2026-01-01 08:43:47.430245159 +0000 UTC m=+1036.565513928 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert") pod "infra-operator-controller-manager-6d99759cf-vcnt9" (UID: "8e2aeeaf-c653-49dc-9165-fc5445bb7aaf") : secret "infra-operator-webhook-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.430623 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.430650 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:46.93064166 +0000 UTC m=+1036.065910429 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "metrics-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.430848 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.430870 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:46.930863606 +0000 UTC m=+1036.066132375 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "webhook-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.446969 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxkhb\" (UniqueName: \"kubernetes.io/projected/57ffbe9b-99b1-433d-86fa-e61435d99318-kube-api-access-gxkhb\") pod \"watcher-operator-controller-manager-9dbdf6486-jcqkc\" (UID: \"57ffbe9b-99b1-433d-86fa-e61435d99318\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.448674 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt5m6\" (UniqueName: \"kubernetes.io/projected/1e5669c2-43cd-4d20-9d76-67e4dee53753-kube-api-access-nt5m6\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.532603 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22vtj\" (UniqueName: \"kubernetes.io/projected/065f64f0-26e4-4b68-8dfa-1bf17f20e99b-kube-api-access-22vtj\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mvr5b\" (UID: \"065f64f0-26e4-4b68-8dfa-1bf17f20e99b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.550723 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22vtj\" (UniqueName: \"kubernetes.io/projected/065f64f0-26e4-4b68-8dfa-1bf17f20e99b-kube-api-access-22vtj\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mvr5b\" (UID: 
\"065f64f0-26e4-4b68-8dfa-1bf17f20e99b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.603342 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.607614 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.633868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.634054 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.634104 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert podName:d721206b-841d-4c5c-9d94-202fff6b8838 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:47.634089788 +0000 UTC m=+1036.769358557 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert") pod "openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" (UID: "d721206b-841d-4c5c-9d94-202fff6b8838") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.651165 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b" Jan 01 08:43:46 crc kubenswrapper[4867]: W0101 08:43:46.673730 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89e16415_08c2_45fe_8a85_b1f12d047cde.slice/crio-aac06de36f3272fb30a37446fae644dd041d78a00c7b9edf5b3622095c71f865 WatchSource:0}: Error finding container aac06de36f3272fb30a37446fae644dd041d78a00c7b9edf5b3622095c71f865: Status 404 returned error can't find the container with id aac06de36f3272fb30a37446fae644dd041d78a00c7b9edf5b3622095c71f865 Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.696844 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.805093 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.815418 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx"] Jan 01 08:43:46 crc kubenswrapper[4867]: W0101 08:43:46.826659 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b2fcdd1_2278_4f3e_b3aa_570321fafee8.slice/crio-1a816711b70b70d5c564d502769bd7a9565dccd8e86c6f1578bb970f142ee09d WatchSource:0}: Error 
finding container 1a816711b70b70d5c564d502769bd7a9565dccd8e86c6f1578bb970f142ee09d: Status 404 returned error can't find the container with id 1a816711b70b70d5c564d502769bd7a9565dccd8e86c6f1578bb970f142ee09d Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.839139 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj"] Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.939537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:46 crc kubenswrapper[4867]: I0101 08:43:46.939626 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.939681 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.939732 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:47.939716807 +0000 UTC m=+1037.074985576 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "webhook-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.939734 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 01 08:43:46 crc kubenswrapper[4867]: E0101 08:43:46.939772 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:47.939760599 +0000 UTC m=+1037.075029368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "metrics-server-cert" not found Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.003513 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj"] Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.020732 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr"] Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.026658 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-njm56"] Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.034111 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4"] Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 
08:43:47.091469 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f" event={"ID":"3b2fcdd1-2278-4f3e-b3aa-570321fafee8","Type":"ContainerStarted","Data":"1a816711b70b70d5c564d502769bd7a9565dccd8e86c6f1578bb970f142ee09d"} Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.092464 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97" event={"ID":"89e16415-08c2-45fe-8a85-b1f12d047cde","Type":"ContainerStarted","Data":"aac06de36f3272fb30a37446fae644dd041d78a00c7b9edf5b3622095c71f865"} Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.093538 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx" event={"ID":"7cee2279-7f63-4416-bf5f-42e1ca8bd334","Type":"ContainerStarted","Data":"a2fe8317c3586ce9867e2c5d1dfa3a1949857b2d7cfed9e4da4590eaf61d3e52"} Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.094591 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj" event={"ID":"bbd5645a-1a67-44ef-8aa2-25fa40566538","Type":"ContainerStarted","Data":"e67fe48a6d64ab39cd68ffccf0a94f057be0ff3070a44e27472640b17afbbe00"} Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.095537 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" event={"ID":"44a94a32-c18a-4e1a-8b8a-461a002ab55c","Type":"ContainerStarted","Data":"06b05d21d8c2b3452617315f4ddfafa26ca6c8fb45d7304f3b33bda942064b7c"} Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.096361 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2" 
event={"ID":"795d3985-4592-42e4-aa83-aaebb35bcc6d","Type":"ContainerStarted","Data":"e6eb9dee0511899c300905b29908b82472a24c92783b15510951c67dd41fe3bd"} Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.097095 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr" event={"ID":"dbc4e740-aa10-4b7b-88db-7c172dae38f9","Type":"ContainerStarted","Data":"0dfc76897d75bbbf8984e6eaac59ffd040b506912ccd5cfc9eb06feb608d87e9"} Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.098853 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4" event={"ID":"06f61537-0f61-41ee-a049-10540e971c9d","Type":"ContainerStarted","Data":"947b5e5d4c3b9422756fa7022b9d1570c364f913d227e871d1440149e2024f8c"} Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.099640 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj" event={"ID":"a8217738-2a7f-41ee-9d06-a329d7c8dbfc","Type":"ContainerStarted","Data":"3677f6921a05cbabbcf45e1109f656f295c71e6635b1615f3b6a50c8f9f5ce57"} Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.100692 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg" event={"ID":"80077b2f-5e6e-49f7-9d98-8c1004ab2cd4","Type":"ContainerStarted","Data":"7c826a202fd8cbd7a6450f7abd4224b40b60961f6fc4df350035b18dbc617402"} Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.154314 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6"] Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.162841 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg"] Jan 01 08:43:47 crc kubenswrapper[4867]: W0101 
08:43:47.172418 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8801d693_c818_4666_bda5_93d9db1d46a0.slice/crio-60751adddd1c8b25226ecdc46d8d666d6ea88586068b60dc35c4871de96f967e WatchSource:0}: Error finding container 60751adddd1c8b25226ecdc46d8d666d6ea88586068b60dc35c4871de96f967e: Status 404 returned error can't find the container with id 60751adddd1c8b25226ecdc46d8d666d6ea88586068b60dc35c4871de96f967e Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.185120 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7"] Jan 01 08:43:47 crc kubenswrapper[4867]: W0101 08:43:47.195450 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96cea6bb_e017_4e6c_9298_aaf07b775dff.slice/crio-0bc1e892b524001556852e26b7030547bc45ce2e7bac0088a21643907b485c4a WatchSource:0}: Error finding container 0bc1e892b524001556852e26b7030547bc45ce2e7bac0088a21643907b485c4a: Status 404 returned error can't find the container with id 0bc1e892b524001556852e26b7030547bc45ce2e7bac0088a21643907b485c4a Jan 01 08:43:47 crc kubenswrapper[4867]: W0101 08:43:47.205304 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59732ee6_32d1_48be_9e3f_a9989be15bbc.slice/crio-532288a9dffb712aa5d1d1e6f86669122c20a8c478330297c94fdc53c807726d WatchSource:0}: Error finding container 532288a9dffb712aa5d1d1e6f86669122c20a8c478330297c94fdc53c807726d: Status 404 returned error can't find the container with id 532288a9dffb712aa5d1d1e6f86669122c20a8c478330297c94fdc53c807726d Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.227731 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66"] Jan 01 08:43:47 crc 
kubenswrapper[4867]: E0101 08:43:47.246545 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-shwn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-9b6f8f78c-2tl66_openstack-operators(26964f50-c878-4612-b298-634abc246f6a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.247054 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pv927,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-4x7b9_openstack-operators(6834a8b5-22a8-4a7c-b03f-633599137bd2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.247407 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9"] Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.247765 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" podUID="26964f50-c878-4612-b298-634abc246f6a" Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.248331 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" podUID="6834a8b5-22a8-4a7c-b03f-633599137bd2" Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.259591 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4"] Jan 01 08:43:47 crc kubenswrapper[4867]: W0101 08:43:47.259762 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e2a7fff_5652_4f64_9660_59b811de1346.slice/crio-39520b82ccaaeaf809b10e46bc2e17a33fd0d016a92db244c02119a758db647f WatchSource:0}: Error finding container 39520b82ccaaeaf809b10e46bc2e17a33fd0d016a92db244c02119a758db647f: Status 404 returned error can't find the container with id 39520b82ccaaeaf809b10e46bc2e17a33fd0d016a92db244c02119a758db647f Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.263237 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 
-3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ltjcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7cd87b778f-m4l4l_openstack-operators(2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.263387 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qd9cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-tkjs4_openstack-operators(2e2a7fff-5652-4f64-9660-59b811de1346): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.263642 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p7mv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68c649d9d-276zz_openstack-operators(cd6e1e20-2735-40b9-a1c2-313e2845ffc8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.264844 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" podUID="2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0" Jan 01 08:43:47 crc 
kubenswrapper[4867]: E0101 08:43:47.264853 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" podUID="cd6e1e20-2735-40b9-a1c2-313e2845ffc8" Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.264868 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" podUID="2e2a7fff-5652-4f64-9660-59b811de1346" Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.267280 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l"] Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.281733 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz"] Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.385656 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc"] Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.389786 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b"] Jan 01 08:43:47 crc kubenswrapper[4867]: W0101 08:43:47.391065 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57ffbe9b_99b1_433d_86fa_e61435d99318.slice/crio-d49947bc0fd45cab53a04ea012ff653040ff0a520ff8a8c3be184aea04c13474 WatchSource:0}: Error finding container d49947bc0fd45cab53a04ea012ff653040ff0a520ff8a8c3be184aea04c13474: Status 404 returned error can't find the container with id d49947bc0fd45cab53a04ea012ff653040ff0a520ff8a8c3be184aea04c13474 Jan 01 08:43:47 crc 
kubenswrapper[4867]: E0101 08:43:47.397977 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22vtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-mvr5b_openstack-operators(065f64f0-26e4-4b68-8dfa-1bf17f20e99b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.399417 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b" podUID="065f64f0-26e4-4b68-8dfa-1bf17f20e99b" Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.448930 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert\") pod \"infra-operator-controller-manager-6d99759cf-vcnt9\" (UID: \"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.449119 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.449188 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert podName:8e2aeeaf-c653-49dc-9165-fc5445bb7aaf nodeName:}" failed. No retries permitted until 2026-01-01 08:43:49.449168485 +0000 UTC m=+1038.584437264 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert") pod "infra-operator-controller-manager-6d99759cf-vcnt9" (UID: "8e2aeeaf-c653-49dc-9165-fc5445bb7aaf") : secret "infra-operator-webhook-server-cert" not found Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.651534 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.651763 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.651857 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert podName:d721206b-841d-4c5c-9d94-202fff6b8838 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:49.651833711 +0000 UTC m=+1038.787102490 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert") pod "openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" (UID: "d721206b-841d-4c5c-9d94-202fff6b8838") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.955012 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:47 crc kubenswrapper[4867]: I0101 08:43:47.955345 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.955513 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.955562 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:49.955549077 +0000 UTC m=+1039.090817846 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "metrics-server-cert" not found Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.955977 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 01 08:43:47 crc kubenswrapper[4867]: E0101 08:43:47.956009 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:49.95599894 +0000 UTC m=+1039.091267709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "webhook-server-cert" not found Jan 01 08:43:48 crc kubenswrapper[4867]: I0101 08:43:48.113900 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc" event={"ID":"57ffbe9b-99b1-433d-86fa-e61435d99318","Type":"ContainerStarted","Data":"d49947bc0fd45cab53a04ea012ff653040ff0a520ff8a8c3be184aea04c13474"} Jan 01 08:43:48 crc kubenswrapper[4867]: I0101 08:43:48.115515 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" event={"ID":"2e2a7fff-5652-4f64-9660-59b811de1346","Type":"ContainerStarted","Data":"39520b82ccaaeaf809b10e46bc2e17a33fd0d016a92db244c02119a758db647f"} Jan 01 08:43:48 crc kubenswrapper[4867]: I0101 08:43:48.117593 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" event={"ID":"6834a8b5-22a8-4a7c-b03f-633599137bd2","Type":"ContainerStarted","Data":"b427f52b0c6ceac7b03dd0f90352cb0a95651fa568a571dae7cb7f7cad90dce9"} Jan 01 08:43:48 crc kubenswrapper[4867]: E0101 08:43:48.118158 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" podUID="2e2a7fff-5652-4f64-9660-59b811de1346" Jan 01 08:43:48 crc kubenswrapper[4867]: E0101 08:43:48.118673 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" podUID="6834a8b5-22a8-4a7c-b03f-633599137bd2" Jan 01 08:43:48 crc kubenswrapper[4867]: I0101 08:43:48.146478 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7" event={"ID":"96cea6bb-e017-4e6c-9298-aaf07b775dff","Type":"ContainerStarted","Data":"0bc1e892b524001556852e26b7030547bc45ce2e7bac0088a21643907b485c4a"} Jan 01 08:43:48 crc kubenswrapper[4867]: I0101 08:43:48.150589 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6" event={"ID":"8801d693-c818-4666-bda5-93d9db1d46a0","Type":"ContainerStarted","Data":"60751adddd1c8b25226ecdc46d8d666d6ea88586068b60dc35c4871de96f967e"} Jan 01 08:43:48 crc kubenswrapper[4867]: I0101 08:43:48.151781 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b" event={"ID":"065f64f0-26e4-4b68-8dfa-1bf17f20e99b","Type":"ContainerStarted","Data":"8b4fa6bb1416620ec409471460bc14b2be625242e58ac2ef84a5c8ccec6d203c"} Jan 01 08:43:48 crc kubenswrapper[4867]: E0101 08:43:48.153879 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b" podUID="065f64f0-26e4-4b68-8dfa-1bf17f20e99b" Jan 01 08:43:48 crc kubenswrapper[4867]: I0101 08:43:48.158153 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" event={"ID":"2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0","Type":"ContainerStarted","Data":"a0c0704320e3f3359b35a7eeaa2a4859954e176f6bb84909fcfd6908b58b88d7"} Jan 01 08:43:48 crc kubenswrapper[4867]: E0101 08:43:48.159751 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" podUID="2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0" Jan 01 08:43:48 crc kubenswrapper[4867]: I0101 08:43:48.167409 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" event={"ID":"cd6e1e20-2735-40b9-a1c2-313e2845ffc8","Type":"ContainerStarted","Data":"bcdebbb0fb6949027305407a4a5c12c832333b19e7efb2496988926099499a47"} Jan 01 08:43:48 crc kubenswrapper[4867]: E0101 08:43:48.170928 4867 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" podUID="cd6e1e20-2735-40b9-a1c2-313e2845ffc8" Jan 01 08:43:48 crc kubenswrapper[4867]: I0101 08:43:48.175069 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" event={"ID":"26964f50-c878-4612-b298-634abc246f6a","Type":"ContainerStarted","Data":"45a6f1e8db019bf4e22d84d85919c5526ca212f2506e6d0ba2cee03a5813dc78"} Jan 01 08:43:48 crc kubenswrapper[4867]: E0101 08:43:48.178990 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" podUID="26964f50-c878-4612-b298-634abc246f6a" Jan 01 08:43:48 crc kubenswrapper[4867]: I0101 08:43:48.183458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" event={"ID":"59732ee6-32d1-48be-9e3f-a9989be15bbc","Type":"ContainerStarted","Data":"532288a9dffb712aa5d1d1e6f86669122c20a8c478330297c94fdc53c807726d"} Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.202913 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" 
podUID="cd6e1e20-2735-40b9-a1c2-313e2845ffc8" Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.203936 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" podUID="2e2a7fff-5652-4f64-9660-59b811de1346" Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.203974 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" podUID="6834a8b5-22a8-4a7c-b03f-633599137bd2" Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.204016 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b" podUID="065f64f0-26e4-4b68-8dfa-1bf17f20e99b" Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.204039 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" podUID="26964f50-c878-4612-b298-634abc246f6a" Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 
08:43:49.206872 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" podUID="2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0" Jan 01 08:43:49 crc kubenswrapper[4867]: I0101 08:43:49.484192 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert\") pod \"infra-operator-controller-manager-6d99759cf-vcnt9\" (UID: \"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.484385 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.484472 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert podName:8e2aeeaf-c653-49dc-9165-fc5445bb7aaf nodeName:}" failed. No retries permitted until 2026-01-01 08:43:53.484446555 +0000 UTC m=+1042.619715324 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert") pod "infra-operator-controller-manager-6d99759cf-vcnt9" (UID: "8e2aeeaf-c653-49dc-9165-fc5445bb7aaf") : secret "infra-operator-webhook-server-cert" not found Jan 01 08:43:49 crc kubenswrapper[4867]: I0101 08:43:49.686523 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.686693 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.686745 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert podName:d721206b-841d-4c5c-9d94-202fff6b8838 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:53.68673009 +0000 UTC m=+1042.821998859 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert") pod "openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" (UID: "d721206b-841d-4c5c-9d94-202fff6b8838") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:43:49 crc kubenswrapper[4867]: I0101 08:43:49.991689 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:49 crc kubenswrapper[4867]: I0101 08:43:49.992006 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.992202 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.992229 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.992252 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:53.992236156 +0000 UTC m=+1043.127504925 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "metrics-server-cert" not found Jan 01 08:43:49 crc kubenswrapper[4867]: E0101 08:43:49.992354 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:43:53.992325668 +0000 UTC m=+1043.127594497 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "webhook-server-cert" not found Jan 01 08:43:53 crc kubenswrapper[4867]: I0101 08:43:53.545501 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert\") pod \"infra-operator-controller-manager-6d99759cf-vcnt9\" (UID: \"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:43:53 crc kubenswrapper[4867]: E0101 08:43:53.545671 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 01 08:43:53 crc kubenswrapper[4867]: E0101 08:43:53.546052 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert podName:8e2aeeaf-c653-49dc-9165-fc5445bb7aaf nodeName:}" failed. No retries permitted until 2026-01-01 08:44:01.546031723 +0000 UTC m=+1050.681300492 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert") pod "infra-operator-controller-manager-6d99759cf-vcnt9" (UID: "8e2aeeaf-c653-49dc-9165-fc5445bb7aaf") : secret "infra-operator-webhook-server-cert" not found Jan 01 08:43:53 crc kubenswrapper[4867]: I0101 08:43:53.748830 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:43:53 crc kubenswrapper[4867]: E0101 08:43:53.749052 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:43:53 crc kubenswrapper[4867]: E0101 08:43:53.749136 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert podName:d721206b-841d-4c5c-9d94-202fff6b8838 nodeName:}" failed. No retries permitted until 2026-01-01 08:44:01.749116761 +0000 UTC m=+1050.884385530 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert") pod "openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" (UID: "d721206b-841d-4c5c-9d94-202fff6b8838") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:43:54 crc kubenswrapper[4867]: I0101 08:43:54.052260 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:54 crc kubenswrapper[4867]: I0101 08:43:54.052344 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:43:54 crc kubenswrapper[4867]: E0101 08:43:54.052382 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 01 08:43:54 crc kubenswrapper[4867]: E0101 08:43:54.052470 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:44:02.052452306 +0000 UTC m=+1051.187721075 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "webhook-server-cert" not found Jan 01 08:43:54 crc kubenswrapper[4867]: E0101 08:43:54.052471 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 01 08:43:54 crc kubenswrapper[4867]: E0101 08:43:54.052545 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:44:02.052526788 +0000 UTC m=+1051.187795557 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "metrics-server-cert" not found Jan 01 08:44:00 crc kubenswrapper[4867]: E0101 08:44:00.386871 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Jan 01 08:44:00 crc kubenswrapper[4867]: E0101 08:44:00.388429 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cspcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-59zpg_openstack-operators(59732ee6-32d1-48be-9e3f-a9989be15bbc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 01 08:44:00 crc kubenswrapper[4867]: E0101 08:44:00.389894 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" podUID="59732ee6-32d1-48be-9e3f-a9989be15bbc" Jan 01 08:44:01 crc kubenswrapper[4867]: E0101 08:44:01.071341 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c" Jan 01 08:44:01 crc kubenswrapper[4867]: E0101 08:44:01.071732 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zj7vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-568985c78-njm56_openstack-operators(44a94a32-c18a-4e1a-8b8a-461a002ab55c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 01 08:44:01 crc kubenswrapper[4867]: E0101 08:44:01.073384 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" podUID="44a94a32-c18a-4e1a-8b8a-461a002ab55c" Jan 01 08:44:01 crc kubenswrapper[4867]: E0101 08:44:01.315401 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" podUID="59732ee6-32d1-48be-9e3f-a9989be15bbc" Jan 01 08:44:01 crc kubenswrapper[4867]: E0101 08:44:01.315868 4867 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" podUID="44a94a32-c18a-4e1a-8b8a-461a002ab55c" Jan 01 08:44:01 crc kubenswrapper[4867]: I0101 08:44:01.577359 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert\") pod \"infra-operator-controller-manager-6d99759cf-vcnt9\" (UID: \"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:44:01 crc kubenswrapper[4867]: I0101 08:44:01.599237 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e2aeeaf-c653-49dc-9165-fc5445bb7aaf-cert\") pod \"infra-operator-controller-manager-6d99759cf-vcnt9\" (UID: \"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:44:01 crc kubenswrapper[4867]: I0101 08:44:01.621140 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:44:01 crc kubenswrapper[4867]: I0101 08:44:01.779958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:44:01 crc kubenswrapper[4867]: E0101 08:44:01.780198 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:44:01 crc kubenswrapper[4867]: E0101 08:44:01.780301 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert podName:d721206b-841d-4c5c-9d94-202fff6b8838 nodeName:}" failed. No retries permitted until 2026-01-01 08:44:17.780283622 +0000 UTC m=+1066.915552381 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert") pod "openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" (UID: "d721206b-841d-4c5c-9d94-202fff6b8838") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.093542 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.093855 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:44:02 crc kubenswrapper[4867]: E0101 08:44:02.093755 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 01 08:44:02 crc kubenswrapper[4867]: E0101 08:44:02.093984 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 01 08:44:02 crc kubenswrapper[4867]: E0101 08:44:02.094032 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:44:18.094018288 +0000 UTC m=+1067.229287047 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "metrics-server-cert" not found Jan 01 08:44:02 crc kubenswrapper[4867]: E0101 08:44:02.094072 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs podName:1e5669c2-43cd-4d20-9d76-67e4dee53753 nodeName:}" failed. No retries permitted until 2026-01-01 08:44:18.09406156 +0000 UTC m=+1067.229330329 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs") pod "openstack-operator-controller-manager-7df7568dd6-9drs7" (UID: "1e5669c2-43cd-4d20-9d76-67e4dee53753") : secret "webhook-server-cert" not found Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.115357 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9"] Jan 01 08:44:02 crc kubenswrapper[4867]: W0101 08:44:02.131997 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e2aeeaf_c653_49dc_9165_fc5445bb7aaf.slice/crio-b4aeb7ec70949d86f4d498118130a12c6a186a09c7ffe536082ad8ae4d951a85 WatchSource:0}: Error finding container b4aeb7ec70949d86f4d498118130a12c6a186a09c7ffe536082ad8ae4d951a85: Status 404 returned error can't find the container with id b4aeb7ec70949d86f4d498118130a12c6a186a09c7ffe536082ad8ae4d951a85 Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.338455 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4" 
event={"ID":"06f61537-0f61-41ee-a049-10540e971c9d","Type":"ContainerStarted","Data":"a4300a7a0892d73e3fd9e5b9dc04eb9c04b1fdd16428370cb7939e35883e6596"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.338809 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.342523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" event={"ID":"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf","Type":"ContainerStarted","Data":"b4aeb7ec70949d86f4d498118130a12c6a186a09c7ffe536082ad8ae4d951a85"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.345379 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj" event={"ID":"bbd5645a-1a67-44ef-8aa2-25fa40566538","Type":"ContainerStarted","Data":"b48c4a4e64e52828a62de54641797b6a95a0372d28a7a493026ee9d793fea217"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.345983 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.348092 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f" event={"ID":"3b2fcdd1-2278-4f3e-b3aa-570321fafee8","Type":"ContainerStarted","Data":"cd7e8b0f27d29143dfad82a3e52f3c9d900c58a82349cbb3fa7fbdf4c6a2799e"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.348499 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.368468 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg" event={"ID":"80077b2f-5e6e-49f7-9d98-8c1004ab2cd4","Type":"ContainerStarted","Data":"d9f2a5dbbddf9acbba705779c9b3c0d18fb24f1957d2b9f9e37f3113089b87c6"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.368527 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.385314 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc" event={"ID":"57ffbe9b-99b1-433d-86fa-e61435d99318","Type":"ContainerStarted","Data":"e31e268fc322c1cbb62d2ce9550a4d9008fee38b6b4125d674c9e1d7e3cbd06a"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.385972 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.391606 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4" podStartSLOduration=3.34769546 podStartE2EDuration="17.391589622s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.029024669 +0000 UTC m=+1036.164293438" lastFinishedPulling="2026-01-01 08:44:01.072918831 +0000 UTC m=+1050.208187600" observedRunningTime="2026-01-01 08:44:02.365148282 +0000 UTC m=+1051.500417051" watchObservedRunningTime="2026-01-01 08:44:02.391589622 +0000 UTC m=+1051.526858391" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.409428 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97" 
event={"ID":"89e16415-08c2-45fe-8a85-b1f12d047cde","Type":"ContainerStarted","Data":"b44849f942159ea0c1aa3fd94771b4d4292853d8ca64f4cb843ba5708f5f2091"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.409549 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.415769 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f" podStartSLOduration=3.176139625 podStartE2EDuration="17.415749459s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:46.838349348 +0000 UTC m=+1035.973618117" lastFinishedPulling="2026-01-01 08:44:01.077959182 +0000 UTC m=+1050.213227951" observedRunningTime="2026-01-01 08:44:02.391941102 +0000 UTC m=+1051.527209871" watchObservedRunningTime="2026-01-01 08:44:02.415749459 +0000 UTC m=+1051.551018228" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.419427 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj" podStartSLOduration=3.391318611 podStartE2EDuration="17.419415021s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.015242243 +0000 UTC m=+1036.150511012" lastFinishedPulling="2026-01-01 08:44:01.043338663 +0000 UTC m=+1050.178607422" observedRunningTime="2026-01-01 08:44:02.414414431 +0000 UTC m=+1051.549683210" watchObservedRunningTime="2026-01-01 08:44:02.419415021 +0000 UTC m=+1051.554683790" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.420943 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx" 
event={"ID":"7cee2279-7f63-4416-bf5f-42e1ca8bd334","Type":"ContainerStarted","Data":"98c6cfed9258f4c4d9ca2034198c0c6d3f86d071960735f9108f6c6e4a4f8ea3"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.421576 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.426012 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2" event={"ID":"795d3985-4592-42e4-aa83-aaebb35bcc6d","Type":"ContainerStarted","Data":"ed99c7a6522c54e664942d654c0cbe6ff15ab2bb2d01dd81f6d39af60d6db64c"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.426775 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.432456 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr" event={"ID":"dbc4e740-aa10-4b7b-88db-7c172dae38f9","Type":"ContainerStarted","Data":"d16a47518e479f2a2e54cc1971caa1577186edafab064436d8235e14f442b188"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.432974 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.435280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6" event={"ID":"8801d693-c818-4666-bda5-93d9db1d46a0","Type":"ContainerStarted","Data":"54e1e480f31d923a0a4255a38fe42692ba4fb1ac35d75f9f4de6b8a1acf229ea"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.435737 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.440318 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj" event={"ID":"a8217738-2a7f-41ee-9d06-a329d7c8dbfc","Type":"ContainerStarted","Data":"2b165f09c6004e6c47411aedbae2d3bc10d4466552ae126f56866e167c5dd2b8"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.440989 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.442174 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg" podStartSLOduration=2.816535914 podStartE2EDuration="17.442154538s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:46.446476924 +0000 UTC m=+1035.581745693" lastFinishedPulling="2026-01-01 08:44:01.072095548 +0000 UTC m=+1050.207364317" observedRunningTime="2026-01-01 08:44:02.438828805 +0000 UTC m=+1051.574097594" watchObservedRunningTime="2026-01-01 08:44:02.442154538 +0000 UTC m=+1051.577423307" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.445160 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7" event={"ID":"96cea6bb-e017-4e6c-9298-aaf07b775dff","Type":"ContainerStarted","Data":"dc4f5bc8a93b45cdfae9532ad588a8d41559c7ad6e4ec1b62542ed7fb3089fc7"} Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.445332 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.460673 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97" podStartSLOduration=3.104065217 podStartE2EDuration="17.460657297s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:46.686370902 +0000 UTC m=+1035.821639671" lastFinishedPulling="2026-01-01 08:44:01.042962982 +0000 UTC m=+1050.178231751" observedRunningTime="2026-01-01 08:44:02.459306679 +0000 UTC m=+1051.594575448" watchObservedRunningTime="2026-01-01 08:44:02.460657297 +0000 UTC m=+1051.595926066" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.496144 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc" podStartSLOduration=2.7869041599999997 podStartE2EDuration="16.49612665s" podCreationTimestamp="2026-01-01 08:43:46 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.397759475 +0000 UTC m=+1036.533028244" lastFinishedPulling="2026-01-01 08:44:01.106981965 +0000 UTC m=+1050.242250734" observedRunningTime="2026-01-01 08:44:02.491324025 +0000 UTC m=+1051.626592814" watchObservedRunningTime="2026-01-01 08:44:02.49612665 +0000 UTC m=+1051.631395419" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.524688 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7" podStartSLOduration=3.645786087 podStartE2EDuration="17.524670279s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.19976451 +0000 UTC m=+1036.335033279" lastFinishedPulling="2026-01-01 08:44:01.078648702 +0000 UTC m=+1050.213917471" observedRunningTime="2026-01-01 08:44:02.51614343 +0000 UTC m=+1051.651412189" watchObservedRunningTime="2026-01-01 08:44:02.524670279 +0000 UTC m=+1051.659939048" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.539625 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6" podStartSLOduration=3.619848802 podStartE2EDuration="17.539609508s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.177578429 +0000 UTC m=+1036.312847188" lastFinishedPulling="2026-01-01 08:44:01.097339125 +0000 UTC m=+1050.232607894" observedRunningTime="2026-01-01 08:44:02.535123202 +0000 UTC m=+1051.670391971" watchObservedRunningTime="2026-01-01 08:44:02.539609508 +0000 UTC m=+1051.674878277" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.559208 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx" podStartSLOduration=3.340537759 podStartE2EDuration="17.559189756s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:46.853547404 +0000 UTC m=+1035.988816173" lastFinishedPulling="2026-01-01 08:44:01.072199401 +0000 UTC m=+1050.207468170" observedRunningTime="2026-01-01 08:44:02.557834458 +0000 UTC m=+1051.693103227" watchObservedRunningTime="2026-01-01 08:44:02.559189756 +0000 UTC m=+1051.694458525" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.587615 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr" podStartSLOduration=3.53622858 podStartE2EDuration="17.587598762s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.022611569 +0000 UTC m=+1036.157880338" lastFinishedPulling="2026-01-01 08:44:01.073981751 +0000 UTC m=+1050.209250520" observedRunningTime="2026-01-01 08:44:02.584259298 +0000 UTC m=+1051.719528067" watchObservedRunningTime="2026-01-01 08:44:02.587598762 +0000 UTC m=+1051.722867531" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.610481 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj" podStartSLOduration=3.423495313 podStartE2EDuration="17.610447072s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:46.855965622 +0000 UTC m=+1035.991234391" lastFinishedPulling="2026-01-01 08:44:01.042917391 +0000 UTC m=+1050.178186150" observedRunningTime="2026-01-01 08:44:02.607648283 +0000 UTC m=+1051.742917052" watchObservedRunningTime="2026-01-01 08:44:02.610447072 +0000 UTC m=+1051.745715841" Jan 01 08:44:02 crc kubenswrapper[4867]: I0101 08:44:02.626430 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2" podStartSLOduration=3.2640964390000002 podStartE2EDuration="17.626413919s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:46.733533543 +0000 UTC m=+1035.868802312" lastFinishedPulling="2026-01-01 08:44:01.095851023 +0000 UTC m=+1050.231119792" observedRunningTime="2026-01-01 08:44:02.622039126 +0000 UTC m=+1051.757307905" watchObservedRunningTime="2026-01-01 08:44:02.626413919 +0000 UTC m=+1051.761682688" Jan 01 08:44:06 crc kubenswrapper[4867]: I0101 08:44:06.006054 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-mvsxj" Jan 01 08:44:06 crc kubenswrapper[4867]: I0101 08:44:06.014008 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-5jhrr" Jan 01 08:44:06 crc kubenswrapper[4867]: I0101 08:44:06.091667 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-gbrnj" Jan 01 08:44:06 crc kubenswrapper[4867]: I0101 08:44:06.135122 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-svlz4" Jan 01 08:44:06 crc kubenswrapper[4867]: I0101 08:44:06.270411 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-q2rv6" Jan 01 08:44:06 crc kubenswrapper[4867]: I0101 08:44:06.295337 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7h2r7" Jan 01 08:44:06 crc kubenswrapper[4867]: I0101 08:44:06.607564 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jcqkc" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.528791 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" event={"ID":"2e2a7fff-5652-4f64-9660-59b811de1346","Type":"ContainerStarted","Data":"187960cd2edc2b164f040bf98414a670b1cb4b37ccbb027ffa8a2c12133b8a74"} Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.529467 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.530190 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" event={"ID":"6834a8b5-22a8-4a7c-b03f-633599137bd2","Type":"ContainerStarted","Data":"2ae9f03684697b0c664045ba69bcac50e59a8070388b46fe32b9384707574703"} Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.530349 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.531611 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" event={"ID":"cd6e1e20-2735-40b9-a1c2-313e2845ffc8","Type":"ContainerStarted","Data":"bb194acc94dc626fc6afe6b0df3600c0a34fe727c8bd376f1d7c01e73c571c56"} Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.531934 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.533117 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" event={"ID":"8e2aeeaf-c653-49dc-9165-fc5445bb7aaf","Type":"ContainerStarted","Data":"06efe8e4253a164f0285bf2ff4e25a544f5f51fc0ba4f2ede770a117c7a83b3b"} Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.533192 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.534507 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" event={"ID":"26964f50-c878-4612-b298-634abc246f6a","Type":"ContainerStarted","Data":"8e82e13efebc594f2c44e790b4ca0eb0f3914ca11ba296bf3d7137919fd40d02"} Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.534800 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.535701 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b" event={"ID":"065f64f0-26e4-4b68-8dfa-1bf17f20e99b","Type":"ContainerStarted","Data":"50c31d001fcb7574398c0201ee600de534ed33a6c40ffe635c0bf33a8574cc75"} Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.537179 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" event={"ID":"2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0","Type":"ContainerStarted","Data":"2c7db06c9527a42ba0b298dbfc3c2931d91645ade2952570c268ae75fcd6db44"} Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.537457 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.550583 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" podStartSLOduration=3.253878733 podStartE2EDuration="27.550559104s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.263260099 +0000 UTC m=+1036.398528858" lastFinishedPulling="2026-01-01 08:44:11.55994046 +0000 UTC m=+1060.695209229" observedRunningTime="2026-01-01 08:44:12.542967631 +0000 UTC m=+1061.678236420" watchObservedRunningTime="2026-01-01 08:44:12.550559104 +0000 UTC m=+1061.685827883" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.580372 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" podStartSLOduration=3.26737508 podStartE2EDuration="27.580348958s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.246950272 +0000 UTC m=+1036.382219041" lastFinishedPulling="2026-01-01 08:44:11.55992415 +0000 UTC m=+1060.695192919" observedRunningTime="2026-01-01 08:44:12.574434732 +0000 UTC m=+1061.709703511" watchObservedRunningTime="2026-01-01 08:44:12.580348958 +0000 UTC m=+1061.715617727" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.591096 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" 
podStartSLOduration=18.16778603 podStartE2EDuration="27.591079608s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:44:02.135770038 +0000 UTC m=+1051.271038807" lastFinishedPulling="2026-01-01 08:44:11.559063616 +0000 UTC m=+1060.694332385" observedRunningTime="2026-01-01 08:44:12.588878327 +0000 UTC m=+1061.724147096" watchObservedRunningTime="2026-01-01 08:44:12.591079608 +0000 UTC m=+1061.726348377" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.610177 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" podStartSLOduration=3.228768799 podStartE2EDuration="27.610156633s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.263572737 +0000 UTC m=+1036.398841496" lastFinishedPulling="2026-01-01 08:44:11.644960541 +0000 UTC m=+1060.780229330" observedRunningTime="2026-01-01 08:44:12.609794583 +0000 UTC m=+1061.745063352" watchObservedRunningTime="2026-01-01 08:44:12.610156633 +0000 UTC m=+1061.745425402" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.626035 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" podStartSLOduration=3.3141576600000002 podStartE2EDuration="27.626016877s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.246348005 +0000 UTC m=+1036.381616774" lastFinishedPulling="2026-01-01 08:44:11.558207212 +0000 UTC m=+1060.693475991" observedRunningTime="2026-01-01 08:44:12.622308963 +0000 UTC m=+1061.757577732" watchObservedRunningTime="2026-01-01 08:44:12.626016877 +0000 UTC m=+1061.761285646" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.644879 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvr5b" 
podStartSLOduration=2.437618287 podStartE2EDuration="26.644852524s" podCreationTimestamp="2026-01-01 08:43:46 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.397786536 +0000 UTC m=+1036.533055315" lastFinishedPulling="2026-01-01 08:44:11.605020773 +0000 UTC m=+1060.740289552" observedRunningTime="2026-01-01 08:44:12.633965069 +0000 UTC m=+1061.769233868" watchObservedRunningTime="2026-01-01 08:44:12.644852524 +0000 UTC m=+1061.780121303" Jan 01 08:44:12 crc kubenswrapper[4867]: I0101 08:44:12.654829 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" podStartSLOduration=3.356597159 podStartE2EDuration="27.654812323s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.263093754 +0000 UTC m=+1036.398362523" lastFinishedPulling="2026-01-01 08:44:11.561308918 +0000 UTC m=+1060.696577687" observedRunningTime="2026-01-01 08:44:12.649893696 +0000 UTC m=+1061.785162485" watchObservedRunningTime="2026-01-01 08:44:12.654812323 +0000 UTC m=+1061.790081092" Jan 01 08:44:14 crc kubenswrapper[4867]: I0101 08:44:14.553498 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" event={"ID":"59732ee6-32d1-48be-9e3f-a9989be15bbc","Type":"ContainerStarted","Data":"b3ee579ced7398ac8023826afcc13dc2073f36de3294e6cf99bc7c8a45d3da59"} Jan 01 08:44:14 crc kubenswrapper[4867]: I0101 08:44:14.555000 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" Jan 01 08:44:14 crc kubenswrapper[4867]: I0101 08:44:14.555927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" 
event={"ID":"44a94a32-c18a-4e1a-8b8a-461a002ab55c","Type":"ContainerStarted","Data":"0ee8f4aed9257dc6569bc94cfc9709201e43d7400f1f1d91d5debab8395a3201"} Jan 01 08:44:14 crc kubenswrapper[4867]: I0101 08:44:14.556220 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" Jan 01 08:44:14 crc kubenswrapper[4867]: I0101 08:44:14.613003 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" podStartSLOduration=3.080637199 podStartE2EDuration="29.612973743s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.029057529 +0000 UTC m=+1036.164326298" lastFinishedPulling="2026-01-01 08:44:13.561394063 +0000 UTC m=+1062.696662842" observedRunningTime="2026-01-01 08:44:14.612847649 +0000 UTC m=+1063.748116428" watchObservedRunningTime="2026-01-01 08:44:14.612973743 +0000 UTC m=+1063.748242552" Jan 01 08:44:14 crc kubenswrapper[4867]: I0101 08:44:14.620462 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" podStartSLOduration=3.183617673 podStartE2EDuration="29.620443942s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:43:47.208159445 +0000 UTC m=+1036.343428214" lastFinishedPulling="2026-01-01 08:44:13.644985714 +0000 UTC m=+1062.780254483" observedRunningTime="2026-01-01 08:44:14.590523034 +0000 UTC m=+1063.725791833" watchObservedRunningTime="2026-01-01 08:44:14.620443942 +0000 UTC m=+1063.755712741" Jan 01 08:44:15 crc kubenswrapper[4867]: I0101 08:44:15.915633 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-k6qkg" Jan 01 08:44:15 crc kubenswrapper[4867]: I0101 08:44:15.929966 4867 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-mxm97" Jan 01 08:44:15 crc kubenswrapper[4867]: I0101 08:44:15.948589 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-vp4t2" Jan 01 08:44:15 crc kubenswrapper[4867]: I0101 08:44:15.975040 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-8h6dx" Jan 01 08:44:16 crc kubenswrapper[4867]: I0101 08:44:16.001575 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hn42f" Jan 01 08:44:16 crc kubenswrapper[4867]: I0101 08:44:16.175922 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-m4l4l" Jan 01 08:44:16 crc kubenswrapper[4867]: I0101 08:44:16.193429 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-276zz" Jan 01 08:44:16 crc kubenswrapper[4867]: I0101 08:44:16.208301 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4x7b9" Jan 01 08:44:16 crc kubenswrapper[4867]: I0101 08:44:16.244139 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-2tl66" Jan 01 08:44:16 crc kubenswrapper[4867]: I0101 08:44:16.258627 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-tkjs4" Jan 01 08:44:17 crc kubenswrapper[4867]: I0101 08:44:17.850489 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:44:17 crc kubenswrapper[4867]: I0101 08:44:17.860436 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d721206b-841d-4c5c-9d94-202fff6b8838-cert\") pod \"openstack-baremetal-operator-controller-manager-5c4776bcc5q4862\" (UID: \"d721206b-841d-4c5c-9d94-202fff6b8838\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:44:18 crc kubenswrapper[4867]: I0101 08:44:18.044517 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:44:18 crc kubenswrapper[4867]: I0101 08:44:18.155175 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:44:18 crc kubenswrapper[4867]: I0101 08:44:18.155461 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:44:18 crc kubenswrapper[4867]: I0101 08:44:18.159392 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-webhook-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:44:18 crc kubenswrapper[4867]: I0101 08:44:18.167469 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e5669c2-43cd-4d20-9d76-67e4dee53753-metrics-certs\") pod \"openstack-operator-controller-manager-7df7568dd6-9drs7\" (UID: \"1e5669c2-43cd-4d20-9d76-67e4dee53753\") " pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:44:18 crc kubenswrapper[4867]: I0101 08:44:18.346051 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862"] Jan 01 08:44:18 crc kubenswrapper[4867]: W0101 08:44:18.349668 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd721206b_841d_4c5c_9d94_202fff6b8838.slice/crio-80fc7b4251effd3e69cf74423cc567aeeadc60befa4c9947c0711f1702cbb909 WatchSource:0}: Error finding container 80fc7b4251effd3e69cf74423cc567aeeadc60befa4c9947c0711f1702cbb909: Status 404 returned error can't find the container with id 80fc7b4251effd3e69cf74423cc567aeeadc60befa4c9947c0711f1702cbb909 Jan 01 08:44:18 crc kubenswrapper[4867]: I0101 08:44:18.429181 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:44:18 crc kubenswrapper[4867]: I0101 08:44:18.585813 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" event={"ID":"d721206b-841d-4c5c-9d94-202fff6b8838","Type":"ContainerStarted","Data":"80fc7b4251effd3e69cf74423cc567aeeadc60befa4c9947c0711f1702cbb909"} Jan 01 08:44:18 crc kubenswrapper[4867]: I0101 08:44:18.681688 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7"] Jan 01 08:44:19 crc kubenswrapper[4867]: I0101 08:44:19.599517 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" event={"ID":"1e5669c2-43cd-4d20-9d76-67e4dee53753","Type":"ContainerStarted","Data":"564ef7620afd3e466e2c603dd812671fba3c55394b43dd20b5b97168d72b4676"} Jan 01 08:44:21 crc kubenswrapper[4867]: I0101 08:44:21.632048 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vcnt9" Jan 01 08:44:25 crc kubenswrapper[4867]: I0101 08:44:25.641671 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" event={"ID":"1e5669c2-43cd-4d20-9d76-67e4dee53753","Type":"ContainerStarted","Data":"99a305378975c3fc4891bd2817a3e0b5fb9490c5fce9d84a3bb32cf04610795f"} Jan 01 08:44:26 crc kubenswrapper[4867]: I0101 08:44:26.095279 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-njm56" Jan 01 08:44:26 crc kubenswrapper[4867]: I0101 08:44:26.209602 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-59zpg" Jan 01 08:44:26 crc kubenswrapper[4867]: I0101 08:44:26.648569 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:44:26 crc kubenswrapper[4867]: I0101 08:44:26.682825 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" podStartSLOduration=40.682805161 podStartE2EDuration="40.682805161s" podCreationTimestamp="2026-01-01 08:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:44:26.672232695 +0000 UTC m=+1075.807501474" watchObservedRunningTime="2026-01-01 08:44:26.682805161 +0000 UTC m=+1075.818073950" Jan 01 08:44:26 crc kubenswrapper[4867]: E0101 08:44:26.910972 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 01 08:44:27 crc kubenswrapper[4867]: I0101 08:44:27.659164 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" event={"ID":"d721206b-841d-4c5c-9d94-202fff6b8838","Type":"ContainerStarted","Data":"8816829bffbec1f3bbeb7163f4a7c79231e12431a8580d01c72831983d3ea8b6"} Jan 01 08:44:27 crc kubenswrapper[4867]: I0101 08:44:27.659615 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:44:27 crc kubenswrapper[4867]: I0101 08:44:27.695549 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" 
podStartSLOduration=33.72629222 podStartE2EDuration="42.695530393s" podCreationTimestamp="2026-01-01 08:43:45 +0000 UTC" firstStartedPulling="2026-01-01 08:44:18.352496992 +0000 UTC m=+1067.487765761" lastFinishedPulling="2026-01-01 08:44:27.321735165 +0000 UTC m=+1076.457003934" observedRunningTime="2026-01-01 08:44:27.691410548 +0000 UTC m=+1076.826679387" watchObservedRunningTime="2026-01-01 08:44:27.695530393 +0000 UTC m=+1076.830799172" Jan 01 08:44:38 crc kubenswrapper[4867]: I0101 08:44:38.056616 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c4776bcc5q4862" Jan 01 08:44:38 crc kubenswrapper[4867]: I0101 08:44:38.437920 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7df7568dd6-9drs7" Jan 01 08:44:51 crc kubenswrapper[4867]: I0101 08:44:51.330858 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:44:51 crc kubenswrapper[4867]: I0101 08:44:51.331735 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.741641 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dtzxj"] Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.743366 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.745473 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.746143 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.746461 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.751545 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-x5rtq" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.801217 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dtzxj"] Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.810633 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lj4mw"] Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.811985 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.819010 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.823602 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lj4mw"] Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.858426 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5kcj\" (UniqueName: \"kubernetes.io/projected/baa9a732-d77a-4f52-a689-53f88422f10e-kube-api-access-x5kcj\") pod \"dnsmasq-dns-84bb9d8bd9-dtzxj\" (UID: \"baa9a732-d77a-4f52-a689-53f88422f10e\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.858507 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa9a732-d77a-4f52-a689-53f88422f10e-config\") pod \"dnsmasq-dns-84bb9d8bd9-dtzxj\" (UID: \"baa9a732-d77a-4f52-a689-53f88422f10e\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.959876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-dns-svc\") pod \"dnsmasq-dns-5f854695bc-lj4mw\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.959955 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5kcj\" (UniqueName: \"kubernetes.io/projected/baa9a732-d77a-4f52-a689-53f88422f10e-kube-api-access-x5kcj\") pod \"dnsmasq-dns-84bb9d8bd9-dtzxj\" (UID: \"baa9a732-d77a-4f52-a689-53f88422f10e\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" 
Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.959979 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-config\") pod \"dnsmasq-dns-5f854695bc-lj4mw\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.960039 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa9a732-d77a-4f52-a689-53f88422f10e-config\") pod \"dnsmasq-dns-84bb9d8bd9-dtzxj\" (UID: \"baa9a732-d77a-4f52-a689-53f88422f10e\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.960098 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98q2m\" (UniqueName: \"kubernetes.io/projected/6d228478-592e-4cad-976d-cf0365f2d80e-kube-api-access-98q2m\") pod \"dnsmasq-dns-5f854695bc-lj4mw\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.960939 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa9a732-d77a-4f52-a689-53f88422f10e-config\") pod \"dnsmasq-dns-84bb9d8bd9-dtzxj\" (UID: \"baa9a732-d77a-4f52-a689-53f88422f10e\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" Jan 01 08:44:54 crc kubenswrapper[4867]: I0101 08:44:54.980186 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5kcj\" (UniqueName: \"kubernetes.io/projected/baa9a732-d77a-4f52-a689-53f88422f10e-kube-api-access-x5kcj\") pod \"dnsmasq-dns-84bb9d8bd9-dtzxj\" (UID: \"baa9a732-d77a-4f52-a689-53f88422f10e\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 
08:44:55.060280 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 08:44:55.060990 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98q2m\" (UniqueName: \"kubernetes.io/projected/6d228478-592e-4cad-976d-cf0365f2d80e-kube-api-access-98q2m\") pod \"dnsmasq-dns-5f854695bc-lj4mw\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 08:44:55.061049 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-dns-svc\") pod \"dnsmasq-dns-5f854695bc-lj4mw\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 08:44:55.061073 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-config\") pod \"dnsmasq-dns-5f854695bc-lj4mw\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 08:44:55.062043 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-config\") pod \"dnsmasq-dns-5f854695bc-lj4mw\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 08:44:55.062124 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-dns-svc\") pod \"dnsmasq-dns-5f854695bc-lj4mw\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " 
pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 08:44:55.094632 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98q2m\" (UniqueName: \"kubernetes.io/projected/6d228478-592e-4cad-976d-cf0365f2d80e-kube-api-access-98q2m\") pod \"dnsmasq-dns-5f854695bc-lj4mw\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 08:44:55.126415 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 08:44:55.534816 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dtzxj"] Jan 01 08:44:55 crc kubenswrapper[4867]: W0101 08:44:55.543416 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaa9a732_d77a_4f52_a689_53f88422f10e.slice/crio-c9df0b879d196f3ba6a8185a9a59af76f282b8f9abced41b48cf5347224d936d WatchSource:0}: Error finding container c9df0b879d196f3ba6a8185a9a59af76f282b8f9abced41b48cf5347224d936d: Status 404 returned error can't find the container with id c9df0b879d196f3ba6a8185a9a59af76f282b8f9abced41b48cf5347224d936d Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 08:44:55.545434 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 08:44:55.611534 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lj4mw"] Jan 01 08:44:55 crc kubenswrapper[4867]: I0101 08:44:55.921474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" event={"ID":"baa9a732-d77a-4f52-a689-53f88422f10e","Type":"ContainerStarted","Data":"c9df0b879d196f3ba6a8185a9a59af76f282b8f9abced41b48cf5347224d936d"} Jan 01 08:44:55 crc 
kubenswrapper[4867]: I0101 08:44:55.922799 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" event={"ID":"6d228478-592e-4cad-976d-cf0365f2d80e","Type":"ContainerStarted","Data":"39f6cdfe21347aaaad9c545c763919925d827f7f102ac78da97b8f765114c49f"} Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.636424 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lj4mw"] Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.660922 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-h46zn"] Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.662320 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.675834 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-h46zn"] Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.792100 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cphwl\" (UniqueName: \"kubernetes.io/projected/2e3af20b-ad60-4621-a781-9fc1721111ef-kube-api-access-cphwl\") pod \"dnsmasq-dns-c7cbb8f79-h46zn\" (UID: \"2e3af20b-ad60-4621-a781-9fc1721111ef\") " pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.792185 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-config\") pod \"dnsmasq-dns-c7cbb8f79-h46zn\" (UID: \"2e3af20b-ad60-4621-a781-9fc1721111ef\") " pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.792236 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-h46zn\" (UID: \"2e3af20b-ad60-4621-a781-9fc1721111ef\") " pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.893005 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-h46zn\" (UID: \"2e3af20b-ad60-4621-a781-9fc1721111ef\") " pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.893320 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cphwl\" (UniqueName: \"kubernetes.io/projected/2e3af20b-ad60-4621-a781-9fc1721111ef-kube-api-access-cphwl\") pod \"dnsmasq-dns-c7cbb8f79-h46zn\" (UID: \"2e3af20b-ad60-4621-a781-9fc1721111ef\") " pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.893399 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-config\") pod \"dnsmasq-dns-c7cbb8f79-h46zn\" (UID: \"2e3af20b-ad60-4621-a781-9fc1721111ef\") " pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.894036 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-h46zn\" (UID: \"2e3af20b-ad60-4621-a781-9fc1721111ef\") " pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.894205 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-config\") pod \"dnsmasq-dns-c7cbb8f79-h46zn\" (UID: 
\"2e3af20b-ad60-4621-a781-9fc1721111ef\") " pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.901190 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dtzxj"] Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.915906 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cphwl\" (UniqueName: \"kubernetes.io/projected/2e3af20b-ad60-4621-a781-9fc1721111ef-kube-api-access-cphwl\") pod \"dnsmasq-dns-c7cbb8f79-h46zn\" (UID: \"2e3af20b-ad60-4621-a781-9fc1721111ef\") " pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.931058 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-4brlh"] Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.932125 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.983289 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:44:56 crc kubenswrapper[4867]: I0101 08:44:56.985679 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-4brlh"] Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.001551 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-dns-svc\") pod \"dnsmasq-dns-95f5f6995-4brlh\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.001667 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkp4v\" (UniqueName: \"kubernetes.io/projected/70afb24e-9e9b-4c9a-b4bd-991497c43647-kube-api-access-fkp4v\") pod \"dnsmasq-dns-95f5f6995-4brlh\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.001699 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-config\") pod \"dnsmasq-dns-95f5f6995-4brlh\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.106411 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-config\") pod \"dnsmasq-dns-95f5f6995-4brlh\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.106759 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-dns-svc\") pod \"dnsmasq-dns-95f5f6995-4brlh\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.107539 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkp4v\" (UniqueName: \"kubernetes.io/projected/70afb24e-9e9b-4c9a-b4bd-991497c43647-kube-api-access-fkp4v\") pod \"dnsmasq-dns-95f5f6995-4brlh\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.107557 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-config\") pod \"dnsmasq-dns-95f5f6995-4brlh\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.107619 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-dns-svc\") pod \"dnsmasq-dns-95f5f6995-4brlh\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.172559 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkp4v\" (UniqueName: \"kubernetes.io/projected/70afb24e-9e9b-4c9a-b4bd-991497c43647-kube-api-access-fkp4v\") pod \"dnsmasq-dns-95f5f6995-4brlh\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.257547 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.392419 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-h46zn"] Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.708590 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-4brlh"] Jan 01 08:44:57 crc kubenswrapper[4867]: W0101 08:44:57.716127 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70afb24e_9e9b_4c9a_b4bd_991497c43647.slice/crio-77af901a2c9127405e730a65746c9b373b94637eb26af63d1cae0a4d7f7e2717 WatchSource:0}: Error finding container 77af901a2c9127405e730a65746c9b373b94637eb26af63d1cae0a4d7f7e2717: Status 404 returned error can't find the container with id 77af901a2c9127405e730a65746c9b373b94637eb26af63d1cae0a4d7f7e2717 Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.938475 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" event={"ID":"2e3af20b-ad60-4621-a781-9fc1721111ef","Type":"ContainerStarted","Data":"83e396ca0a360f9f88a9ec3e8a1ee0b6e128507f48a1ab62135d3939058ab952"} Jan 01 08:44:57 crc kubenswrapper[4867]: I0101 08:44:57.940271 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-4brlh" event={"ID":"70afb24e-9e9b-4c9a-b4bd-991497c43647","Type":"ContainerStarted","Data":"77af901a2c9127405e730a65746c9b373b94637eb26af63d1cae0a4d7f7e2717"} Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.021177 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.022628 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.024631 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.024919 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.025133 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9b8r2" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.025189 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.025431 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.025542 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.025607 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.033289 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.097501 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.099562 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.107858 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.152552 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.152743 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.152898 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.153001 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-kpfrz" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.153745 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.154042 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.155112 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.156209 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.156252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.156272 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.156296 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.156324 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.156338 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.156359 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.156374 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgxj9\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-kube-api-access-fgxj9\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.156395 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.156416 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.156444 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.258851 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.258996 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.259149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.259203 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.259423 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.259573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.259616 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.259712 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.259774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgxj9\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-kube-api-access-fgxj9\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.259833 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.259864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.260123 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.260159 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.260231 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.260329 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.260353 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzfw\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-kube-api-access-gbzfw\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " 
pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.260586 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.260624 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/84d7aac6-1073-41c0-acff-169e36ec197d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.260670 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.260700 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/84d7aac6-1073-41c0-acff-169e36ec197d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.260772 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.260991 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.263445 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.264217 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.264531 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.264703 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.264764 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.266649 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.269531 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.270168 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.279013 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.282250 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgxj9\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-kube-api-access-fgxj9\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.282941 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.287308 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.351296 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.369649 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/84d7aac6-1073-41c0-acff-169e36ec197d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.369692 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.369721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/84d7aac6-1073-41c0-acff-169e36ec197d-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.369754 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.369774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.369790 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.369824 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.369851 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.369870 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.369919 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.369934 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbzfw\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-kube-api-access-gbzfw\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.370328 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.370433 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.370651 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.370843 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.372277 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.372542 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.374575 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/84d7aac6-1073-41c0-acff-169e36ec197d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.376565 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 
crc kubenswrapper[4867]: I0101 08:44:58.377170 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/84d7aac6-1073-41c0-acff-169e36ec197d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.378265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.386951 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbzfw\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-kube-api-access-gbzfw\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.398415 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.480446 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.895125 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 08:44:58 crc kubenswrapper[4867]: I0101 08:44:58.953939 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99","Type":"ContainerStarted","Data":"43ccb89b14f5d6ab3efda5233941517a044e3be038303100a78892f5374fe376"} Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.081135 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 08:44:59 crc kubenswrapper[4867]: W0101 08:44:59.085681 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84d7aac6_1073_41c0_acff_169e36ec197d.slice/crio-0a965d6da9ca07e8eb784a68437e692b51772e19780f24a10abeda0c60018d21 WatchSource:0}: Error finding container 0a965d6da9ca07e8eb784a68437e692b51772e19780f24a10abeda0c60018d21: Status 404 returned error can't find the container with id 0a965d6da9ca07e8eb784a68437e692b51772e19780f24a10abeda0c60018d21 Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.677987 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.681260 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.686070 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.686475 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.686606 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.686730 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2mj76" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.691356 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.714025 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.796771 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.796825 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-default\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.796860 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.796942 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.796984 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.797009 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.797033 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x94zz\" (UniqueName: \"kubernetes.io/projected/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kube-api-access-x94zz\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:44:59 crc kubenswrapper[4867]: I0101 08:44:59.797051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kolla-config\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.899996 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.900057 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.900086 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.900109 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x94zz\" (UniqueName: \"kubernetes.io/projected/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kube-api-access-x94zz\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.900161 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.900198 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.900223 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-default\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.900281 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.900899 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.900974 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.902922 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.903593 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-default\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.904031 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kolla-config\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.906506 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.915662 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x94zz\" (UniqueName: \"kubernetes.io/projected/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kube-api-access-x94zz\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.923540 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.933419 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") " pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:44:59.973207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"84d7aac6-1073-41c0-acff-169e36ec197d","Type":"ContainerStarted","Data":"0a965d6da9ca07e8eb784a68437e692b51772e19780f24a10abeda0c60018d21"} Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.008309 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.152873 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct"] Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.156125 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.159056 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.159115 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.171303 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct"] Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.306693 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-secret-volume\") pod \"collect-profiles-29454285-kqkct\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.306793 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-config-volume\") pod \"collect-profiles-29454285-kqkct\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.306864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbjdw\" (UniqueName: \"kubernetes.io/projected/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-kube-api-access-pbjdw\") pod \"collect-profiles-29454285-kqkct\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.408477 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-config-volume\") pod \"collect-profiles-29454285-kqkct\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.408573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbjdw\" (UniqueName: \"kubernetes.io/projected/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-kube-api-access-pbjdw\") pod \"collect-profiles-29454285-kqkct\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.408640 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-secret-volume\") pod \"collect-profiles-29454285-kqkct\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.412799 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-config-volume\") pod \"collect-profiles-29454285-kqkct\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.412976 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-secret-volume\") pod \"collect-profiles-29454285-kqkct\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.433612 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbjdw\" (UniqueName: \"kubernetes.io/projected/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-kube-api-access-pbjdw\") pod \"collect-profiles-29454285-kqkct\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.480492 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.514849 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.982216 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct"] Jan 01 08:45:00 crc kubenswrapper[4867]: I0101 08:45:00.984584 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb","Type":"ContainerStarted","Data":"2a41ae11feae684b800d38b921b0c7c3b2a86837e000d38cf362797dc6c9f5c4"} Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.062002 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.065476 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.069060 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.069135 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-59kbz" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.069184 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.069240 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.071527 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.119005 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.119059 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.119100 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.119128 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.119147 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.119170 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.119201 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpxhd\" (UniqueName: \"kubernetes.io/projected/d2662702-83ed-4457-a630-e8a6d07ffb8b-kube-api-access-dpxhd\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.119230 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.221584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.221664 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.221701 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.221726 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.221758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.221801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpxhd\" (UniqueName: \"kubernetes.io/projected/d2662702-83ed-4457-a630-e8a6d07ffb8b-kube-api-access-dpxhd\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.221832 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.221897 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.222968 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.223330 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") device mount path 
\"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.224322 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.224444 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.225354 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.231252 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.245312 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc 
kubenswrapper[4867]: I0101 08:45:01.247518 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpxhd\" (UniqueName: \"kubernetes.io/projected/d2662702-83ed-4457-a630-e8a6d07ffb8b-kube-api-access-dpxhd\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.249647 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.395994 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.480982 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.482200 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.501161 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.502453 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lxqqj" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.502922 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.515730 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.529628 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.529722 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-config-data\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.529758 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.529827 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-kolla-config\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.529848 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk76d\" (UniqueName: \"kubernetes.io/projected/b43ddff2-67cd-4ab7-84c1-763dd002457c-kube-api-access-lk76d\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.631265 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-config-data\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.631326 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.631407 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-kolla-config\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.631429 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk76d\" (UniqueName: \"kubernetes.io/projected/b43ddff2-67cd-4ab7-84c1-763dd002457c-kube-api-access-lk76d\") pod \"memcached-0\" (UID: 
\"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.631479 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.634068 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-config-data\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.639383 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.639583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-kolla-config\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.662505 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.664454 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk76d\" (UniqueName: 
\"kubernetes.io/projected/b43ddff2-67cd-4ab7-84c1-763dd002457c-kube-api-access-lk76d\") pod \"memcached-0\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " pod="openstack/memcached-0" Jan 01 08:45:01 crc kubenswrapper[4867]: I0101 08:45:01.835814 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 01 08:45:03 crc kubenswrapper[4867]: I0101 08:45:03.075679 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:45:03 crc kubenswrapper[4867]: I0101 08:45:03.077252 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 01 08:45:03 crc kubenswrapper[4867]: I0101 08:45:03.080186 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vdfp8" Jan 01 08:45:03 crc kubenswrapper[4867]: I0101 08:45:03.084756 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:45:03 crc kubenswrapper[4867]: I0101 08:45:03.163791 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmdnl\" (UniqueName: \"kubernetes.io/projected/ac91c4fe-e982-4a27-b80f-e7d0d7659cc6-kube-api-access-fmdnl\") pod \"kube-state-metrics-0\" (UID: \"ac91c4fe-e982-4a27-b80f-e7d0d7659cc6\") " pod="openstack/kube-state-metrics-0" Jan 01 08:45:03 crc kubenswrapper[4867]: I0101 08:45:03.264761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmdnl\" (UniqueName: \"kubernetes.io/projected/ac91c4fe-e982-4a27-b80f-e7d0d7659cc6-kube-api-access-fmdnl\") pod \"kube-state-metrics-0\" (UID: \"ac91c4fe-e982-4a27-b80f-e7d0d7659cc6\") " pod="openstack/kube-state-metrics-0" Jan 01 08:45:03 crc kubenswrapper[4867]: I0101 08:45:03.284817 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmdnl\" (UniqueName: 
\"kubernetes.io/projected/ac91c4fe-e982-4a27-b80f-e7d0d7659cc6-kube-api-access-fmdnl\") pod \"kube-state-metrics-0\" (UID: \"ac91c4fe-e982-4a27-b80f-e7d0d7659cc6\") " pod="openstack/kube-state-metrics-0" Jan 01 08:45:03 crc kubenswrapper[4867]: I0101 08:45:03.399057 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 01 08:45:04 crc kubenswrapper[4867]: W0101 08:45:04.552520 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32e3945b_5eb9_42ef_b8ce_9a3a3d0cfe46.slice/crio-1ca43f25116a8c49bffe002290ffcccb78d75bf13d769d95781e1486b1d40797 WatchSource:0}: Error finding container 1ca43f25116a8c49bffe002290ffcccb78d75bf13d769d95781e1486b1d40797: Status 404 returned error can't find the container with id 1ca43f25116a8c49bffe002290ffcccb78d75bf13d769d95781e1486b1d40797 Jan 01 08:45:05 crc kubenswrapper[4867]: I0101 08:45:05.027593 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" event={"ID":"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46","Type":"ContainerStarted","Data":"1ca43f25116a8c49bffe002290ffcccb78d75bf13d769d95781e1486b1d40797"} Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.771390 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8jl6r"] Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.773448 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.774634 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jl6r"] Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.777097 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rkhj5" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.779124 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.779563 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.787056 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-smgl6"] Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.789478 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.798550 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-smgl6"] Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.830396 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-log-ovn\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.830557 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.830595 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run-ovn\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.830626 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2llt4\" (UniqueName: \"kubernetes.io/projected/02bf5c7d-1674-4308-8bcf-751d6c4a3783-kube-api-access-2llt4\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.830655 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/02bf5c7d-1674-4308-8bcf-751d6c4a3783-scripts\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.830679 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-combined-ca-bundle\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.830711 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-ovn-controller-tls-certs\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.931688 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8n6v\" (UniqueName: \"kubernetes.io/projected/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-kube-api-access-j8n6v\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.931775 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-log\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.931803 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-scripts\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.931825 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-run\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.931844 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-lib\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.931900 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.932051 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run-ovn\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.932076 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-etc-ovs\") pod \"ovn-controller-ovs-smgl6\" (UID: 
\"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.932101 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2llt4\" (UniqueName: \"kubernetes.io/projected/02bf5c7d-1674-4308-8bcf-751d6c4a3783-kube-api-access-2llt4\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.932124 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02bf5c7d-1674-4308-8bcf-751d6c4a3783-scripts\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.932146 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-combined-ca-bundle\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.932176 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-ovn-controller-tls-certs\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.932200 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-log-ovn\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc 
kubenswrapper[4867]: I0101 08:45:06.932432 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.932514 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run-ovn\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.932532 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-log-ovn\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.937153 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02bf5c7d-1674-4308-8bcf-751d6c4a3783-scripts\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.943713 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-ovn-controller-tls-certs\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.945189 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-combined-ca-bundle\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:06 crc kubenswrapper[4867]: I0101 08:45:06.949301 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2llt4\" (UniqueName: \"kubernetes.io/projected/02bf5c7d-1674-4308-8bcf-751d6c4a3783-kube-api-access-2llt4\") pod \"ovn-controller-8jl6r\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.033322 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-etc-ovs\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.033428 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8n6v\" (UniqueName: \"kubernetes.io/projected/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-kube-api-access-j8n6v\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.033478 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-log\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.033493 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-scripts\") pod \"ovn-controller-ovs-smgl6\" (UID: 
\"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.033507 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-run\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.033521 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-lib\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.033763 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-lib\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.033815 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-run\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.033953 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-etc-ovs\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.036153 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-scripts\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.040938 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-log\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.064378 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8n6v\" (UniqueName: \"kubernetes.io/projected/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-kube-api-access-j8n6v\") pod \"ovn-controller-ovs-smgl6\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.107690 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:07 crc kubenswrapper[4867]: I0101 08:45:07.121401 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.669588 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.672175 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.678622 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.678730 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-k5w8q" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.678823 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.679149 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.679760 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.688661 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.779847 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.779923 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.779948 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.779976 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfvjv\" (UniqueName: \"kubernetes.io/projected/654c613f-4f96-41f0-8937-d4be9f7897da-kube-api-access-mfvjv\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.780001 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-config\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.780053 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.780145 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.780164 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.881906 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.881953 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.882045 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.882075 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.882096 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 
01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.882125 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfvjv\" (UniqueName: \"kubernetes.io/projected/654c613f-4f96-41f0-8937-d4be9f7897da-kube-api-access-mfvjv\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.882148 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-config\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.882197 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.882323 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.882695 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.883399 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-config\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.887019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.887934 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.888191 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.897265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc kubenswrapper[4867]: I0101 08:45:08.905313 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:08 crc 
kubenswrapper[4867]: I0101 08:45:08.911442 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfvjv\" (UniqueName: \"kubernetes.io/projected/654c613f-4f96-41f0-8937-d4be9f7897da-kube-api-access-mfvjv\") pod \"ovsdbserver-nb-0\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:09 crc kubenswrapper[4867]: I0101 08:45:09.007334 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.555370 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.568380 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.568463 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.574507 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.574711 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.574972 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-swrqh" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.575362 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.708524 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vccg\" (UniqueName: \"kubernetes.io/projected/1620c75e-1129-4850-9b27-7666e4cb8ed5-kube-api-access-8vccg\") pod 
\"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.708624 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-config\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.708652 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.708674 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.708876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.708996 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " 
pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.709079 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.709179 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.811298 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.811384 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.812378 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.812436 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.812451 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.812606 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.813047 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.813139 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vccg\" (UniqueName: \"kubernetes.io/projected/1620c75e-1129-4850-9b27-7666e4cb8ed5-kube-api-access-8vccg\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.813261 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.812901 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.813513 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.814489 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-config\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.817111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.817111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.819505 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.829122 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vccg\" (UniqueName: \"kubernetes.io/projected/1620c75e-1129-4850-9b27-7666e4cb8ed5-kube-api-access-8vccg\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.832141 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:10 crc kubenswrapper[4867]: I0101 08:45:10.899787 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:20 crc kubenswrapper[4867]: E0101 08:45:20.123060 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 01 08:45:20 crc kubenswrapper[4867]: E0101 08:45:20.123771 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkp4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProb
e:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-4brlh_openstack(70afb24e-9e9b-4c9a-b4bd-991497c43647): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 01 08:45:20 crc kubenswrapper[4867]: E0101 08:45:20.125219 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-4brlh" podUID="70afb24e-9e9b-4c9a-b4bd-991497c43647" Jan 01 08:45:20 crc kubenswrapper[4867]: E0101 08:45:20.157437 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-4brlh" podUID="70afb24e-9e9b-4c9a-b4bd-991497c43647" Jan 01 08:45:20 crc kubenswrapper[4867]: E0101 08:45:20.952584 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 01 08:45:20 crc 
kubenswrapper[4867]: E0101 08:45:20.952761 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbzfw
,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(84d7aac6-1073-41c0-acff-169e36ec197d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 01 08:45:20 crc kubenswrapper[4867]: E0101 08:45:20.954001 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="84d7aac6-1073-41c0-acff-169e36ec197d" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.008340 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.008542 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5kcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-dtzxj_openstack(baa9a732-d77a-4f52-a689-53f88422f10e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 01 08:45:21 crc 
kubenswrapper[4867]: E0101 08:45:21.010354 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" podUID="baa9a732-d77a-4f52-a689-53f88422f10e" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.028965 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.029237 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98q2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-lj4mw_openstack(6d228478-592e-4cad-976d-cf0365f2d80e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.030426 4867 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" podUID="6d228478-592e-4cad-976d-cf0365f2d80e" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.037746 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.038139 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fgxj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 01 08:45:21 crc 
kubenswrapper[4867]: E0101 08:45:21.039955 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.059378 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.059584 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cphwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-c7cbb8f79-h46zn_openstack(2e3af20b-ad60-4621-a781-9fc1721111ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.060834 4867 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" podUID="2e3af20b-ad60-4621-a781-9fc1721111ef" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.167713 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="84d7aac6-1073-41c0-acff-169e36ec197d" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.167849 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" Jan 01 08:45:21 crc kubenswrapper[4867]: E0101 08:45:21.167972 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" podUID="2e3af20b-ad60-4621-a781-9fc1721111ef" Jan 01 08:45:21 crc kubenswrapper[4867]: I0101 08:45:21.330979 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:45:21 crc kubenswrapper[4867]: I0101 08:45:21.331071 4867 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:45:22 crc kubenswrapper[4867]: E0101 08:45:22.846705 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 01 08:45:22 crc kubenswrapper[4867]: E0101 08:45:22.846928 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config
_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x94zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(3bd7d188-bdc2-4aa8-891b-0775de1a5eeb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 01 08:45:22 crc kubenswrapper[4867]: E0101 08:45:22.848013 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" Jan 01 08:45:22 crc kubenswrapper[4867]: I0101 08:45:22.991581 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.012488 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.043698 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98q2m\" (UniqueName: \"kubernetes.io/projected/6d228478-592e-4cad-976d-cf0365f2d80e-kube-api-access-98q2m\") pod \"6d228478-592e-4cad-976d-cf0365f2d80e\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.044164 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-dns-svc\") pod \"6d228478-592e-4cad-976d-cf0365f2d80e\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.044281 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-config\") pod \"6d228478-592e-4cad-976d-cf0365f2d80e\" (UID: \"6d228478-592e-4cad-976d-cf0365f2d80e\") " Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.046118 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-config" (OuterVolumeSpecName: "config") pod "6d228478-592e-4cad-976d-cf0365f2d80e" (UID: "6d228478-592e-4cad-976d-cf0365f2d80e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.048610 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d228478-592e-4cad-976d-cf0365f2d80e" (UID: "6d228478-592e-4cad-976d-cf0365f2d80e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.079862 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d228478-592e-4cad-976d-cf0365f2d80e-kube-api-access-98q2m" (OuterVolumeSpecName: "kube-api-access-98q2m") pod "6d228478-592e-4cad-976d-cf0365f2d80e" (UID: "6d228478-592e-4cad-976d-cf0365f2d80e"). InnerVolumeSpecName "kube-api-access-98q2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.148743 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa9a732-d77a-4f52-a689-53f88422f10e-config\") pod \"baa9a732-d77a-4f52-a689-53f88422f10e\" (UID: \"baa9a732-d77a-4f52-a689-53f88422f10e\") " Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.148838 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5kcj\" (UniqueName: \"kubernetes.io/projected/baa9a732-d77a-4f52-a689-53f88422f10e-kube-api-access-x5kcj\") pod \"baa9a732-d77a-4f52-a689-53f88422f10e\" (UID: \"baa9a732-d77a-4f52-a689-53f88422f10e\") " Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.149450 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baa9a732-d77a-4f52-a689-53f88422f10e-config" (OuterVolumeSpecName: "config") pod "baa9a732-d77a-4f52-a689-53f88422f10e" (UID: "baa9a732-d77a-4f52-a689-53f88422f10e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.149493 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.150313 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d228478-592e-4cad-976d-cf0365f2d80e-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.150389 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98q2m\" (UniqueName: \"kubernetes.io/projected/6d228478-592e-4cad-976d-cf0365f2d80e-kube-api-access-98q2m\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.156070 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa9a732-d77a-4f52-a689-53f88422f10e-kube-api-access-x5kcj" (OuterVolumeSpecName: "kube-api-access-x5kcj") pod "baa9a732-d77a-4f52-a689-53f88422f10e" (UID: "baa9a732-d77a-4f52-a689-53f88422f10e"). InnerVolumeSpecName "kube-api-access-x5kcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.177180 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.178218 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" Jan 01 08:45:23 crc kubenswrapper[4867]: E0101 08:45:23.182427 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" podUID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.184173 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-lj4mw" event={"ID":"6d228478-592e-4cad-976d-cf0365f2d80e","Type":"ContainerDied","Data":"39f6cdfe21347aaaad9c545c763919925d827f7f102ac78da97b8f765114c49f"} Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.184209 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-dtzxj" event={"ID":"baa9a732-d77a-4f52-a689-53f88422f10e","Type":"ContainerDied","Data":"c9df0b879d196f3ba6a8185a9a59af76f282b8f9abced41b48cf5347224d936d"} Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.184225 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" event={"ID":"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46","Type":"ContainerStarted","Data":"82a5121fba39bcb412d420a1466556657a05b4f87cb976579186b0e0909ae070"} Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.210178 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" podStartSLOduration=23.210160126 podStartE2EDuration="23.210160126s" podCreationTimestamp="2026-01-01 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:45:23.201640067 +0000 UTC m=+1132.336908836" 
watchObservedRunningTime="2026-01-01 08:45:23.210160126 +0000 UTC m=+1132.345428905" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.251822 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa9a732-d77a-4f52-a689-53f88422f10e-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.251858 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5kcj\" (UniqueName: \"kubernetes.io/projected/baa9a732-d77a-4f52-a689-53f88422f10e-kube-api-access-x5kcj\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.271243 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lj4mw"] Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.295498 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lj4mw"] Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.316840 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dtzxj"] Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.321844 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dtzxj"] Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.378488 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.455519 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.468038 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jl6r"] Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.473530 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 01 08:45:23 crc kubenswrapper[4867]: I0101 08:45:23.704948 4867 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 01 08:45:24 crc kubenswrapper[4867]: I0101 08:45:24.192942 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b43ddff2-67cd-4ab7-84c1-763dd002457c","Type":"ContainerStarted","Data":"b94ed9bd15f06982404e253657bd423c9017c54e4c5ccb6ba3d583c7fce6e16b"} Jan 01 08:45:24 crc kubenswrapper[4867]: I0101 08:45:24.199245 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jl6r" event={"ID":"02bf5c7d-1674-4308-8bcf-751d6c4a3783","Type":"ContainerStarted","Data":"d6833be0face85320c24caf8c9689ccc88d4efbbf08d20fcdfdc9aa8fe11e591"} Jan 01 08:45:24 crc kubenswrapper[4867]: I0101 08:45:24.203045 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d2662702-83ed-4457-a630-e8a6d07ffb8b","Type":"ContainerStarted","Data":"d291f59607c758f47430417e26a8d57995eef27056410df0cfe7a2699c5e4b06"} Jan 01 08:45:24 crc kubenswrapper[4867]: I0101 08:45:24.203079 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d2662702-83ed-4457-a630-e8a6d07ffb8b","Type":"ContainerStarted","Data":"face748ce233053adbf72661aa6e0d193c0f083397b4d512504857ce7c00181a"} Jan 01 08:45:24 crc kubenswrapper[4867]: I0101 08:45:24.208474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1620c75e-1129-4850-9b27-7666e4cb8ed5","Type":"ContainerStarted","Data":"6fb001ab3687da1505fb876aa0cf0ffe799d47211ed2d8b9d98e1fcd49d7501b"} Jan 01 08:45:24 crc kubenswrapper[4867]: I0101 08:45:24.210069 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ac91c4fe-e982-4a27-b80f-e7d0d7659cc6","Type":"ContainerStarted","Data":"0148a5641a066bae6765eee8c241f1ed0108d51f663b68195353d1ed0980e9a0"} Jan 01 08:45:24 crc kubenswrapper[4867]: I0101 08:45:24.211696 4867 generic.go:334] "Generic (PLEG): container 
finished" podID="32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46" containerID="82a5121fba39bcb412d420a1466556657a05b4f87cb976579186b0e0909ae070" exitCode=0 Jan 01 08:45:24 crc kubenswrapper[4867]: I0101 08:45:24.211736 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" event={"ID":"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46","Type":"ContainerDied","Data":"82a5121fba39bcb412d420a1466556657a05b4f87cb976579186b0e0909ae070"} Jan 01 08:45:24 crc kubenswrapper[4867]: I0101 08:45:24.641795 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-smgl6"] Jan 01 08:45:24 crc kubenswrapper[4867]: I0101 08:45:24.745640 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 01 08:45:24 crc kubenswrapper[4867]: W0101 08:45:24.753364 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1b4e13b_b1c7_49b6_8ac7_d6c74c869c7e.slice/crio-8286e944304ec89bdcd775c355caac2a1190ce6bb52fc57bb96a5e21818bb725 WatchSource:0}: Error finding container 8286e944304ec89bdcd775c355caac2a1190ce6bb52fc57bb96a5e21818bb725: Status 404 returned error can't find the container with id 8286e944304ec89bdcd775c355caac2a1190ce6bb52fc57bb96a5e21818bb725 Jan 01 08:45:25 crc kubenswrapper[4867]: I0101 08:45:25.138155 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d228478-592e-4cad-976d-cf0365f2d80e" path="/var/lib/kubelet/pods/6d228478-592e-4cad-976d-cf0365f2d80e/volumes" Jan 01 08:45:25 crc kubenswrapper[4867]: I0101 08:45:25.139181 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa9a732-d77a-4f52-a689-53f88422f10e" path="/var/lib/kubelet/pods/baa9a732-d77a-4f52-a689-53f88422f10e/volumes" Jan 01 08:45:25 crc kubenswrapper[4867]: I0101 08:45:25.223017 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-smgl6" 
event={"ID":"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e","Type":"ContainerStarted","Data":"8286e944304ec89bdcd775c355caac2a1190ce6bb52fc57bb96a5e21818bb725"} Jan 01 08:45:25 crc kubenswrapper[4867]: W0101 08:45:25.497597 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod654c613f_4f96_41f0_8937_d4be9f7897da.slice/crio-a15319d8cd56cd781759c4e46ec289c46ceed8c5ee394edf13b3b22673a258c4 WatchSource:0}: Error finding container a15319d8cd56cd781759c4e46ec289c46ceed8c5ee394edf13b3b22673a258c4: Status 404 returned error can't find the container with id a15319d8cd56cd781759c4e46ec289c46ceed8c5ee394edf13b3b22673a258c4 Jan 01 08:45:26 crc kubenswrapper[4867]: I0101 08:45:26.243775 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"654c613f-4f96-41f0-8937-d4be9f7897da","Type":"ContainerStarted","Data":"a15319d8cd56cd781759c4e46ec289c46ceed8c5ee394edf13b3b22673a258c4"} Jan 01 08:45:26 crc kubenswrapper[4867]: I0101 08:45:26.767204 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:26 crc kubenswrapper[4867]: I0101 08:45:26.812680 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbjdw\" (UniqueName: \"kubernetes.io/projected/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-kube-api-access-pbjdw\") pod \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " Jan 01 08:45:26 crc kubenswrapper[4867]: I0101 08:45:26.812816 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-config-volume\") pod \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " Jan 01 08:45:26 crc kubenswrapper[4867]: I0101 08:45:26.812983 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-secret-volume\") pod \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\" (UID: \"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46\") " Jan 01 08:45:26 crc kubenswrapper[4867]: I0101 08:45:26.822211 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-config-volume" (OuterVolumeSpecName: "config-volume") pod "32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46" (UID: "32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:26 crc kubenswrapper[4867]: I0101 08:45:26.822213 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-kube-api-access-pbjdw" (OuterVolumeSpecName: "kube-api-access-pbjdw") pod "32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46" (UID: "32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46"). 
InnerVolumeSpecName "kube-api-access-pbjdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:26 crc kubenswrapper[4867]: I0101 08:45:26.833520 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46" (UID: "32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:45:26 crc kubenswrapper[4867]: I0101 08:45:26.914421 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-config-volume\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:26 crc kubenswrapper[4867]: I0101 08:45:26.914450 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:26 crc kubenswrapper[4867]: I0101 08:45:26.914460 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbjdw\" (UniqueName: \"kubernetes.io/projected/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46-kube-api-access-pbjdw\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:27 crc kubenswrapper[4867]: I0101 08:45:27.251860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" event={"ID":"32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46","Type":"ContainerDied","Data":"1ca43f25116a8c49bffe002290ffcccb78d75bf13d769d95781e1486b1d40797"} Jan 01 08:45:27 crc kubenswrapper[4867]: I0101 08:45:27.252252 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ca43f25116a8c49bffe002290ffcccb78d75bf13d769d95781e1486b1d40797" Jan 01 08:45:27 crc kubenswrapper[4867]: I0101 08:45:27.252070 4867 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct" Jan 01 08:45:28 crc kubenswrapper[4867]: I0101 08:45:28.266462 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d2662702-83ed-4457-a630-e8a6d07ffb8b","Type":"ContainerDied","Data":"d291f59607c758f47430417e26a8d57995eef27056410df0cfe7a2699c5e4b06"} Jan 01 08:45:28 crc kubenswrapper[4867]: I0101 08:45:28.266796 4867 generic.go:334] "Generic (PLEG): container finished" podID="d2662702-83ed-4457-a630-e8a6d07ffb8b" containerID="d291f59607c758f47430417e26a8d57995eef27056410df0cfe7a2699c5e4b06" exitCode=0 Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.278034 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d2662702-83ed-4457-a630-e8a6d07ffb8b","Type":"ContainerStarted","Data":"efcd353d29f3de492430dcf05725698c36d4fc1c75947e3d7d13befdcc5b7a27"} Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.280563 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"654c613f-4f96-41f0-8937-d4be9f7897da","Type":"ContainerStarted","Data":"799a9220b793a8689047c599a1077afcad897844454e5de218f99838ce959d39"} Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.282867 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1620c75e-1129-4850-9b27-7666e4cb8ed5","Type":"ContainerStarted","Data":"b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099"} Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.284837 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ac91c4fe-e982-4a27-b80f-e7d0d7659cc6","Type":"ContainerStarted","Data":"9caed6a6e124cb26a7f6e990787e129c9b7a7e22f995f9b92bc3c80929ed9d80"} Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.285753 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.287515 4867 generic.go:334] "Generic (PLEG): container finished" podID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerID="40c75e44cba104e45661a1d0c049238ec1f59a119529722a4ed7d8876855db31" exitCode=0 Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.288010 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-smgl6" event={"ID":"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e","Type":"ContainerDied","Data":"40c75e44cba104e45661a1d0c049238ec1f59a119529722a4ed7d8876855db31"} Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.292635 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b43ddff2-67cd-4ab7-84c1-763dd002457c","Type":"ContainerStarted","Data":"4e331c080ef51c9e8e140526532ca6a567a4007701b3c6a5707e70e828973809"} Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.292769 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.296594 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jl6r" event={"ID":"02bf5c7d-1674-4308-8bcf-751d6c4a3783","Type":"ContainerStarted","Data":"c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9"} Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.296815 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-8jl6r" Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.313344 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.889743989 podStartE2EDuration="29.313313328s" podCreationTimestamp="2026-01-01 08:45:00 +0000 UTC" firstStartedPulling="2026-01-01 08:45:23.46908379 +0000 UTC m=+1132.604352569" 
lastFinishedPulling="2026-01-01 08:45:23.892653129 +0000 UTC m=+1133.027921908" observedRunningTime="2026-01-01 08:45:29.307912626 +0000 UTC m=+1138.443181425" watchObservedRunningTime="2026-01-01 08:45:29.313313328 +0000 UTC m=+1138.448582137" Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.358416 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=21.333359069 podStartE2EDuration="26.358393614s" podCreationTimestamp="2026-01-01 08:45:03 +0000 UTC" firstStartedPulling="2026-01-01 08:45:23.384279988 +0000 UTC m=+1132.519548757" lastFinishedPulling="2026-01-01 08:45:28.409314523 +0000 UTC m=+1137.544583302" observedRunningTime="2026-01-01 08:45:29.333870235 +0000 UTC m=+1138.469139104" watchObservedRunningTime="2026-01-01 08:45:29.358393614 +0000 UTC m=+1138.493662423" Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.364733 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=24.276496934 podStartE2EDuration="28.364710331s" podCreationTimestamp="2026-01-01 08:45:01 +0000 UTC" firstStartedPulling="2026-01-01 08:45:23.46373776 +0000 UTC m=+1132.599006529" lastFinishedPulling="2026-01-01 08:45:27.551951117 +0000 UTC m=+1136.687219926" observedRunningTime="2026-01-01 08:45:29.3575629 +0000 UTC m=+1138.492831689" watchObservedRunningTime="2026-01-01 08:45:29.364710331 +0000 UTC m=+1138.499979120" Jan 01 08:45:29 crc kubenswrapper[4867]: I0101 08:45:29.386699 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8jl6r" podStartSLOduration=18.599791704 podStartE2EDuration="23.386675398s" podCreationTimestamp="2026-01-01 08:45:06 +0000 UTC" firstStartedPulling="2026-01-01 08:45:23.469136312 +0000 UTC m=+1132.604405101" lastFinishedPulling="2026-01-01 08:45:28.256019986 +0000 UTC m=+1137.391288795" observedRunningTime="2026-01-01 08:45:29.376960815 +0000 UTC 
m=+1138.512229604" watchObservedRunningTime="2026-01-01 08:45:29.386675398 +0000 UTC m=+1138.521944177" Jan 01 08:45:30 crc kubenswrapper[4867]: I0101 08:45:30.306258 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-smgl6" event={"ID":"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e","Type":"ContainerStarted","Data":"c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b"} Jan 01 08:45:30 crc kubenswrapper[4867]: I0101 08:45:30.306659 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-smgl6" event={"ID":"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e","Type":"ContainerStarted","Data":"823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd"} Jan 01 08:45:30 crc kubenswrapper[4867]: I0101 08:45:30.329932 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-smgl6" podStartSLOduration=20.83193563 podStartE2EDuration="24.329914066s" podCreationTimestamp="2026-01-01 08:45:06 +0000 UTC" firstStartedPulling="2026-01-01 08:45:24.757878746 +0000 UTC m=+1133.893147515" lastFinishedPulling="2026-01-01 08:45:28.255857182 +0000 UTC m=+1137.391125951" observedRunningTime="2026-01-01 08:45:30.329216316 +0000 UTC m=+1139.464485095" watchObservedRunningTime="2026-01-01 08:45:30.329914066 +0000 UTC m=+1139.465182835" Jan 01 08:45:31 crc kubenswrapper[4867]: I0101 08:45:31.313636 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:31 crc kubenswrapper[4867]: I0101 08:45:31.314090 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:45:31 crc kubenswrapper[4867]: I0101 08:45:31.397099 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:31 crc kubenswrapper[4867]: I0101 08:45:31.397453 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:32 crc kubenswrapper[4867]: I0101 08:45:32.323455 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1620c75e-1129-4850-9b27-7666e4cb8ed5","Type":"ContainerStarted","Data":"cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b"} Jan 01 08:45:32 crc kubenswrapper[4867]: I0101 08:45:32.326271 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"654c613f-4f96-41f0-8937-d4be9f7897da","Type":"ContainerStarted","Data":"af04740eea97da4b3747aedaa2d322eabd244cf11d0911b2ba02cff1211719ab"} Jan 01 08:45:32 crc kubenswrapper[4867]: I0101 08:45:32.349387 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=15.352796346 podStartE2EDuration="23.349363527s" podCreationTimestamp="2026-01-01 08:45:09 +0000 UTC" firstStartedPulling="2026-01-01 08:45:23.706256463 +0000 UTC m=+1132.841525242" lastFinishedPulling="2026-01-01 08:45:31.702823614 +0000 UTC m=+1140.838092423" observedRunningTime="2026-01-01 08:45:32.340151729 +0000 UTC m=+1141.475420518" watchObservedRunningTime="2026-01-01 08:45:32.349363527 +0000 UTC m=+1141.484632326" Jan 01 08:45:32 crc kubenswrapper[4867]: I0101 08:45:32.365459 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.130236209 podStartE2EDuration="25.365442009s" podCreationTimestamp="2026-01-01 08:45:07 +0000 UTC" firstStartedPulling="2026-01-01 08:45:25.500615631 +0000 UTC m=+1134.635884400" lastFinishedPulling="2026-01-01 08:45:31.735821411 +0000 UTC m=+1140.871090200" observedRunningTime="2026-01-01 08:45:32.360251583 +0000 UTC m=+1141.495520412" watchObservedRunningTime="2026-01-01 08:45:32.365442009 +0000 UTC m=+1141.500710818" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.007870 4867 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.093002 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.336167 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.405952 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.414853 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.728015 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-4brlh"] Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.757371 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6"] Jan 01 08:45:33 crc kubenswrapper[4867]: E0101 08:45:33.757778 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46" containerName="collect-profiles" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.757796 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46" containerName="collect-profiles" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.758033 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46" containerName="collect-profiles" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.759122 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.764035 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.782047 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6"] Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.788983 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4gj2t"] Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.790003 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.791404 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.822034 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4gj2t"] Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.858573 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.858604 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-config\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.858645 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.858748 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsbqx\" (UniqueName: \"kubernetes.io/projected/44dcf904-76b7-4932-8731-978615eadd0a-kube-api-access-hsbqx\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.962148 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovn-rundir\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.962550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.962570 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-config\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.962623 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.962648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.962683 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-combined-ca-bundle\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.962705 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovs-rundir\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.962724 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c4f874-d21a-42b7-884a-f070d8dc2150-config\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.962747 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsbqx\" (UniqueName: \"kubernetes.io/projected/44dcf904-76b7-4932-8731-978615eadd0a-kube-api-access-hsbqx\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.962785 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grtm8\" (UniqueName: \"kubernetes.io/projected/63c4f874-d21a-42b7-884a-f070d8dc2150-kube-api-access-grtm8\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.963654 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.964185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-config\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.964645 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:33 crc kubenswrapper[4867]: I0101 08:45:33.987824 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-hsbqx\" (UniqueName: \"kubernetes.io/projected/44dcf904-76b7-4932-8731-978615eadd0a-kube-api-access-hsbqx\") pod \"dnsmasq-dns-7bbdc7ccd7-rhzm6\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.010064 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.064492 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-combined-ca-bundle\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.064537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovs-rundir\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.064565 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c4f874-d21a-42b7-884a-f070d8dc2150-config\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.064611 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grtm8\" (UniqueName: \"kubernetes.io/projected/63c4f874-d21a-42b7-884a-f070d8dc2150-kube-api-access-grtm8\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 
01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.064638 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovn-rundir\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.064699 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.066479 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-h46zn"] Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.067351 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c4f874-d21a-42b7-884a-f070d8dc2150-config\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.068084 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovs-rundir\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.068143 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovn-rundir\") pod \"ovn-controller-metrics-4gj2t\" (UID: 
\"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.070407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.071664 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v5cfx"] Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.074348 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.079637 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.080917 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-combined-ca-bundle\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.081523 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v5cfx"] Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.095462 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grtm8\" (UniqueName: \"kubernetes.io/projected/63c4f874-d21a-42b7-884a-f070d8dc2150-kube-api-access-grtm8\") pod \"ovn-controller-metrics-4gj2t\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.112529 
4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.117045 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.128431 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.180922 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.266909 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-dns-svc\") pod \"70afb24e-9e9b-4c9a-b4bd-991497c43647\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.266966 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkp4v\" (UniqueName: \"kubernetes.io/projected/70afb24e-9e9b-4c9a-b4bd-991497c43647-kube-api-access-fkp4v\") pod \"70afb24e-9e9b-4c9a-b4bd-991497c43647\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.266987 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-config\") pod \"70afb24e-9e9b-4c9a-b4bd-991497c43647\" (UID: \"70afb24e-9e9b-4c9a-b4bd-991497c43647\") " Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.267287 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-nb\") pod 
\"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.267375 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70afb24e-9e9b-4c9a-b4bd-991497c43647" (UID: "70afb24e-9e9b-4c9a-b4bd-991497c43647"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.267401 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjl2c\" (UniqueName: \"kubernetes.io/projected/a4dcbf41-d27a-4a66-a24a-785a611208a6-kube-api-access-zjl2c\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.267459 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-config\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.267497 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.267520 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.267845 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.267875 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-config" (OuterVolumeSpecName: "config") pod "70afb24e-9e9b-4c9a-b4bd-991497c43647" (UID: "70afb24e-9e9b-4c9a-b4bd-991497c43647"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.271989 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70afb24e-9e9b-4c9a-b4bd-991497c43647-kube-api-access-fkp4v" (OuterVolumeSpecName: "kube-api-access-fkp4v") pod "70afb24e-9e9b-4c9a-b4bd-991497c43647" (UID: "70afb24e-9e9b-4c9a-b4bd-991497c43647"). InnerVolumeSpecName "kube-api-access-fkp4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.346213 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-4brlh" event={"ID":"70afb24e-9e9b-4c9a-b4bd-991497c43647","Type":"ContainerDied","Data":"77af901a2c9127405e730a65746c9b373b94637eb26af63d1cae0a4d7f7e2717"} Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.346307 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-4brlh" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.348852 4867 generic.go:334] "Generic (PLEG): container finished" podID="2e3af20b-ad60-4621-a781-9fc1721111ef" containerID="24d433b7c68976da3047716eedb0d6af197945922605e3053421306ed393e18f" exitCode=0 Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.349686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" event={"ID":"2e3af20b-ad60-4621-a781-9fc1721111ef","Type":"ContainerDied","Data":"24d433b7c68976da3047716eedb0d6af197945922605e3053421306ed393e18f"} Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.353924 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99","Type":"ContainerStarted","Data":"dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6"} Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.368915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjl2c\" (UniqueName: \"kubernetes.io/projected/a4dcbf41-d27a-4a66-a24a-785a611208a6-kube-api-access-zjl2c\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.368997 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-config\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.369020 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-sb\") pod 
\"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.369053 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.369079 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.369137 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkp4v\" (UniqueName: \"kubernetes.io/projected/70afb24e-9e9b-4c9a-b4bd-991497c43647-kube-api-access-fkp4v\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.369148 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70afb24e-9e9b-4c9a-b4bd-991497c43647-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.370444 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.370523 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-config\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.370585 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.371099 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.387374 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjl2c\" (UniqueName: \"kubernetes.io/projected/a4dcbf41-d27a-4a66-a24a-785a611208a6-kube-api-access-zjl2c\") pod \"dnsmasq-dns-757dc6fff9-v5cfx\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.433980 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-4brlh"] Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.450485 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.471421 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-4brlh"] Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.569390 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6"] Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.612275 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4gj2t"] Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.656875 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.777107 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-dns-svc\") pod \"2e3af20b-ad60-4621-a781-9fc1721111ef\" (UID: \"2e3af20b-ad60-4621-a781-9fc1721111ef\") " Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.777485 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-config\") pod \"2e3af20b-ad60-4621-a781-9fc1721111ef\" (UID: \"2e3af20b-ad60-4621-a781-9fc1721111ef\") " Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.777551 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cphwl\" (UniqueName: \"kubernetes.io/projected/2e3af20b-ad60-4621-a781-9fc1721111ef-kube-api-access-cphwl\") pod \"2e3af20b-ad60-4621-a781-9fc1721111ef\" (UID: \"2e3af20b-ad60-4621-a781-9fc1721111ef\") " Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.780342 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2e3af20b-ad60-4621-a781-9fc1721111ef-kube-api-access-cphwl" (OuterVolumeSpecName: "kube-api-access-cphwl") pod "2e3af20b-ad60-4621-a781-9fc1721111ef" (UID: "2e3af20b-ad60-4621-a781-9fc1721111ef"). InnerVolumeSpecName "kube-api-access-cphwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.793661 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e3af20b-ad60-4621-a781-9fc1721111ef" (UID: "2e3af20b-ad60-4621-a781-9fc1721111ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.813550 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-config" (OuterVolumeSpecName: "config") pod "2e3af20b-ad60-4621-a781-9fc1721111ef" (UID: "2e3af20b-ad60-4621-a781-9fc1721111ef"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.882040 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cphwl\" (UniqueName: \"kubernetes.io/projected/2e3af20b-ad60-4621-a781-9fc1721111ef-kube-api-access-cphwl\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.882283 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.882383 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3af20b-ad60-4621-a781-9fc1721111ef-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.902058 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.926433 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v5cfx"] Jan 01 08:45:34 crc kubenswrapper[4867]: W0101 08:45:34.930977 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4dcbf41_d27a_4a66_a24a_785a611208a6.slice/crio-9208fdcbf89954cf3790f8b02b5aaccfe02d7fcf2a02992f9696f2f46e41fe94 WatchSource:0}: Error finding container 9208fdcbf89954cf3790f8b02b5aaccfe02d7fcf2a02992f9696f2f46e41fe94: Status 404 returned error can't find the container with id 9208fdcbf89954cf3790f8b02b5aaccfe02d7fcf2a02992f9696f2f46e41fe94 Jan 01 08:45:34 crc kubenswrapper[4867]: I0101 08:45:34.943799 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.141919 4867 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="70afb24e-9e9b-4c9a-b4bd-991497c43647" path="/var/lib/kubelet/pods/70afb24e-9e9b-4c9a-b4bd-991497c43647/volumes" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.368913 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.368923 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-h46zn" event={"ID":"2e3af20b-ad60-4621-a781-9fc1721111ef","Type":"ContainerDied","Data":"83e396ca0a360f9f88a9ec3e8a1ee0b6e128507f48a1ab62135d3939058ab952"} Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.368984 4867 scope.go:117] "RemoveContainer" containerID="24d433b7c68976da3047716eedb0d6af197945922605e3053421306ed393e18f" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.372902 4867 generic.go:334] "Generic (PLEG): container finished" podID="a4dcbf41-d27a-4a66-a24a-785a611208a6" containerID="27a350461decde3e1c6b7abed2748c74f42f57b8606023bad6ddf8eae19fd27c" exitCode=0 Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.372969 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" event={"ID":"a4dcbf41-d27a-4a66-a24a-785a611208a6","Type":"ContainerDied","Data":"27a350461decde3e1c6b7abed2748c74f42f57b8606023bad6ddf8eae19fd27c"} Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.373025 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" event={"ID":"a4dcbf41-d27a-4a66-a24a-785a611208a6","Type":"ContainerStarted","Data":"9208fdcbf89954cf3790f8b02b5aaccfe02d7fcf2a02992f9696f2f46e41fe94"} Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.377202 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4gj2t" 
event={"ID":"63c4f874-d21a-42b7-884a-f070d8dc2150","Type":"ContainerStarted","Data":"880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541"} Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.377245 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4gj2t" event={"ID":"63c4f874-d21a-42b7-884a-f070d8dc2150","Type":"ContainerStarted","Data":"6c446e1789571c09aba9cc08cb6a2a94dffcf35f4eb48e907e19acde767a3fa1"} Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.379313 4867 generic.go:334] "Generic (PLEG): container finished" podID="44dcf904-76b7-4932-8731-978615eadd0a" containerID="ba2ca743dc66600316d6eefca657a40868329628c1ce0189a5d81d6ba7076f94" exitCode=0 Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.381272 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" event={"ID":"44dcf904-76b7-4932-8731-978615eadd0a","Type":"ContainerDied","Data":"ba2ca743dc66600316d6eefca657a40868329628c1ce0189a5d81d6ba7076f94"} Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.381324 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" event={"ID":"44dcf904-76b7-4932-8731-978615eadd0a","Type":"ContainerStarted","Data":"1fc17caa18efa7a1f4269c9635f80294f56315fae64dd46d2176441969778651"} Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.381347 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.447089 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-h46zn"] Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.460850 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-h46zn"] Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.460955 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovsdbserver-sb-0" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.463690 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4gj2t" podStartSLOduration=2.463670115 podStartE2EDuration="2.463670115s" podCreationTimestamp="2026-01-01 08:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:45:35.446857903 +0000 UTC m=+1144.582126692" watchObservedRunningTime="2026-01-01 08:45:35.463670115 +0000 UTC m=+1144.598938894" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.811195 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 01 08:45:35 crc kubenswrapper[4867]: E0101 08:45:35.811878 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3af20b-ad60-4621-a781-9fc1721111ef" containerName="init" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.811915 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3af20b-ad60-4621-a781-9fc1721111ef" containerName="init" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.812129 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3af20b-ad60-4621-a781-9fc1721111ef" containerName="init" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.812949 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.814976 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-v6pb8" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.815131 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.815388 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.817944 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.835466 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.909405 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.909635 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.910015 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " 
pod="openstack/ovn-northd-0" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.910138 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-config\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.910298 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-scripts\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.910380 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:35 crc kubenswrapper[4867]: I0101 08:45:35.910454 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8rpj\" (UniqueName: \"kubernetes.io/projected/9943de7c-1d29-416f-ba57-ea51bf9e56f3-kube-api-access-g8rpj\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.013358 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.013407 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.013437 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.013481 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-config\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.013599 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-scripts\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.013630 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.013661 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8rpj\" (UniqueName: \"kubernetes.io/projected/9943de7c-1d29-416f-ba57-ea51bf9e56f3-kube-api-access-g8rpj\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") 
" pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.013994 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.014954 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-scripts\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.015466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-config\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.017803 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.017860 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.033339 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8rpj\" (UniqueName: 
\"kubernetes.io/projected/9943de7c-1d29-416f-ba57-ea51bf9e56f3-kube-api-access-g8rpj\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.051737 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.126735 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.391206 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"84d7aac6-1073-41c0-acff-169e36ec197d","Type":"ContainerStarted","Data":"ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386"} Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.398361 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" event={"ID":"44dcf904-76b7-4932-8731-978615eadd0a","Type":"ContainerStarted","Data":"7beffd6c5a6d6111b79ca3ecbfd5e6c4852dba9d20080a5f77976783df05ff63"} Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.398783 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.406462 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" event={"ID":"a4dcbf41-d27a-4a66-a24a-785a611208a6","Type":"ContainerStarted","Data":"439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d"} Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.406492 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.430736 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" podStartSLOduration=2.4307183820000002 podStartE2EDuration="2.430718382s" podCreationTimestamp="2026-01-01 08:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:45:36.425360541 +0000 UTC m=+1145.560629330" watchObservedRunningTime="2026-01-01 08:45:36.430718382 +0000 UTC m=+1145.565987151" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.449933 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" podStartSLOduration=3.449907651 podStartE2EDuration="3.449907651s" podCreationTimestamp="2026-01-01 08:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:45:36.442099571 +0000 UTC m=+1145.577368360" watchObservedRunningTime="2026-01-01 08:45:36.449907651 +0000 UTC m=+1145.585176420" Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.628364 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 01 08:45:36 crc kubenswrapper[4867]: W0101 08:45:36.632204 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9943de7c_1d29_416f_ba57_ea51bf9e56f3.slice/crio-2afb3fba923f62bb17eecb5f89c16c0e6495c8ed2c3f37ce01d15ea389a0d4e4 WatchSource:0}: Error finding container 2afb3fba923f62bb17eecb5f89c16c0e6495c8ed2c3f37ce01d15ea389a0d4e4: Status 404 returned error can't find the container with id 2afb3fba923f62bb17eecb5f89c16c0e6495c8ed2c3f37ce01d15ea389a0d4e4 Jan 01 08:45:36 crc kubenswrapper[4867]: I0101 08:45:36.838113 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/memcached-0" Jan 01 08:45:37 crc kubenswrapper[4867]: I0101 08:45:37.158863 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3af20b-ad60-4621-a781-9fc1721111ef" path="/var/lib/kubelet/pods/2e3af20b-ad60-4621-a781-9fc1721111ef/volumes" Jan 01 08:45:37 crc kubenswrapper[4867]: I0101 08:45:37.413202 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9943de7c-1d29-416f-ba57-ea51bf9e56f3","Type":"ContainerStarted","Data":"2afb3fba923f62bb17eecb5f89c16c0e6495c8ed2c3f37ce01d15ea389a0d4e4"} Jan 01 08:45:37 crc kubenswrapper[4867]: I0101 08:45:37.415588 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb","Type":"ContainerStarted","Data":"ddbbd1f9c02c5fe9c1620e55640fa0cc298224e65712bb5df0b7c0ca0dbbf444"} Jan 01 08:45:38 crc kubenswrapper[4867]: I0101 08:45:38.422933 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9943de7c-1d29-416f-ba57-ea51bf9e56f3","Type":"ContainerStarted","Data":"92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c"} Jan 01 08:45:38 crc kubenswrapper[4867]: I0101 08:45:38.423280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9943de7c-1d29-416f-ba57-ea51bf9e56f3","Type":"ContainerStarted","Data":"ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2"} Jan 01 08:45:38 crc kubenswrapper[4867]: I0101 08:45:38.423301 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 01 08:45:38 crc kubenswrapper[4867]: I0101 08:45:38.453525 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.5455446200000003 podStartE2EDuration="3.453487326s" podCreationTimestamp="2026-01-01 08:45:35 +0000 UTC" firstStartedPulling="2026-01-01 
08:45:36.637953643 +0000 UTC m=+1145.773222452" lastFinishedPulling="2026-01-01 08:45:37.545896389 +0000 UTC m=+1146.681165158" observedRunningTime="2026-01-01 08:45:38.448942608 +0000 UTC m=+1147.584211397" watchObservedRunningTime="2026-01-01 08:45:38.453487326 +0000 UTC m=+1147.588756105" Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.160627 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7dh4x"] Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.162054 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7dh4x" Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.166737 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.174623 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7dh4x"] Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.218737 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b04661-c1d8-41c9-9365-e36e19ac638c-operator-scripts\") pod \"root-account-create-update-7dh4x\" (UID: \"24b04661-c1d8-41c9-9365-e36e19ac638c\") " pod="openstack/root-account-create-update-7dh4x" Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.218817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz58c\" (UniqueName: \"kubernetes.io/projected/24b04661-c1d8-41c9-9365-e36e19ac638c-kube-api-access-xz58c\") pod \"root-account-create-update-7dh4x\" (UID: \"24b04661-c1d8-41c9-9365-e36e19ac638c\") " pod="openstack/root-account-create-update-7dh4x" Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.320637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xz58c\" (UniqueName: \"kubernetes.io/projected/24b04661-c1d8-41c9-9365-e36e19ac638c-kube-api-access-xz58c\") pod \"root-account-create-update-7dh4x\" (UID: \"24b04661-c1d8-41c9-9365-e36e19ac638c\") " pod="openstack/root-account-create-update-7dh4x" Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.320761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b04661-c1d8-41c9-9365-e36e19ac638c-operator-scripts\") pod \"root-account-create-update-7dh4x\" (UID: \"24b04661-c1d8-41c9-9365-e36e19ac638c\") " pod="openstack/root-account-create-update-7dh4x" Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.321432 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b04661-c1d8-41c9-9365-e36e19ac638c-operator-scripts\") pod \"root-account-create-update-7dh4x\" (UID: \"24b04661-c1d8-41c9-9365-e36e19ac638c\") " pod="openstack/root-account-create-update-7dh4x" Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.360913 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz58c\" (UniqueName: \"kubernetes.io/projected/24b04661-c1d8-41c9-9365-e36e19ac638c-kube-api-access-xz58c\") pod \"root-account-create-update-7dh4x\" (UID: \"24b04661-c1d8-41c9-9365-e36e19ac638c\") " pod="openstack/root-account-create-update-7dh4x" Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.478646 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7dh4x" Jan 01 08:45:40 crc kubenswrapper[4867]: I0101 08:45:40.778579 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7dh4x"] Jan 01 08:45:40 crc kubenswrapper[4867]: W0101 08:45:40.785779 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24b04661_c1d8_41c9_9365_e36e19ac638c.slice/crio-2286f181a3e83de20d08651fe2ae66a016d642feb7c643ba456412a7fab33ae8 WatchSource:0}: Error finding container 2286f181a3e83de20d08651fe2ae66a016d642feb7c643ba456412a7fab33ae8: Status 404 returned error can't find the container with id 2286f181a3e83de20d08651fe2ae66a016d642feb7c643ba456412a7fab33ae8 Jan 01 08:45:41 crc kubenswrapper[4867]: I0101 08:45:41.454019 4867 generic.go:334] "Generic (PLEG): container finished" podID="24b04661-c1d8-41c9-9365-e36e19ac638c" containerID="797df420b6252d0a8b79bcb4bf7f136bf6cec5144a63c844132b22175617f27e" exitCode=0 Jan 01 08:45:41 crc kubenswrapper[4867]: I0101 08:45:41.454129 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7dh4x" event={"ID":"24b04661-c1d8-41c9-9365-e36e19ac638c","Type":"ContainerDied","Data":"797df420b6252d0a8b79bcb4bf7f136bf6cec5144a63c844132b22175617f27e"} Jan 01 08:45:41 crc kubenswrapper[4867]: I0101 08:45:41.456144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7dh4x" event={"ID":"24b04661-c1d8-41c9-9365-e36e19ac638c","Type":"ContainerStarted","Data":"2286f181a3e83de20d08651fe2ae66a016d642feb7c643ba456412a7fab33ae8"} Jan 01 08:45:42 crc kubenswrapper[4867]: I0101 08:45:42.467364 4867 generic.go:334] "Generic (PLEG): container finished" podID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" containerID="ddbbd1f9c02c5fe9c1620e55640fa0cc298224e65712bb5df0b7c0ca0dbbf444" exitCode=0 Jan 01 08:45:42 crc kubenswrapper[4867]: I0101 08:45:42.467550 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb","Type":"ContainerDied","Data":"ddbbd1f9c02c5fe9c1620e55640fa0cc298224e65712bb5df0b7c0ca0dbbf444"} Jan 01 08:45:42 crc kubenswrapper[4867]: I0101 08:45:42.838551 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7dh4x" Jan 01 08:45:42 crc kubenswrapper[4867]: I0101 08:45:42.972538 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz58c\" (UniqueName: \"kubernetes.io/projected/24b04661-c1d8-41c9-9365-e36e19ac638c-kube-api-access-xz58c\") pod \"24b04661-c1d8-41c9-9365-e36e19ac638c\" (UID: \"24b04661-c1d8-41c9-9365-e36e19ac638c\") " Jan 01 08:45:42 crc kubenswrapper[4867]: I0101 08:45:42.973055 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b04661-c1d8-41c9-9365-e36e19ac638c-operator-scripts\") pod \"24b04661-c1d8-41c9-9365-e36e19ac638c\" (UID: \"24b04661-c1d8-41c9-9365-e36e19ac638c\") " Jan 01 08:45:42 crc kubenswrapper[4867]: I0101 08:45:42.974430 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b04661-c1d8-41c9-9365-e36e19ac638c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24b04661-c1d8-41c9-9365-e36e19ac638c" (UID: "24b04661-c1d8-41c9-9365-e36e19ac638c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:42 crc kubenswrapper[4867]: I0101 08:45:42.979901 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b04661-c1d8-41c9-9365-e36e19ac638c-kube-api-access-xz58c" (OuterVolumeSpecName: "kube-api-access-xz58c") pod "24b04661-c1d8-41c9-9365-e36e19ac638c" (UID: "24b04661-c1d8-41c9-9365-e36e19ac638c"). 
InnerVolumeSpecName "kube-api-access-xz58c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.075281 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b04661-c1d8-41c9-9365-e36e19ac638c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.075312 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz58c\" (UniqueName: \"kubernetes.io/projected/24b04661-c1d8-41c9-9365-e36e19ac638c-kube-api-access-xz58c\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.381313 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6"] Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.381539 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" podUID="44dcf904-76b7-4932-8731-978615eadd0a" containerName="dnsmasq-dns" containerID="cri-o://7beffd6c5a6d6111b79ca3ecbfd5e6c4852dba9d20080a5f77976783df05ff63" gracePeriod=10 Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.382645 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.424642 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-x5wp2"] Jan 01 08:45:43 crc kubenswrapper[4867]: E0101 08:45:43.424987 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b04661-c1d8-41c9-9365-e36e19ac638c" containerName="mariadb-account-create-update" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.425002 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b04661-c1d8-41c9-9365-e36e19ac638c" containerName="mariadb-account-create-update" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 
08:45:43.425166 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b04661-c1d8-41c9-9365-e36e19ac638c" containerName="mariadb-account-create-update" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.425935 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.475141 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb","Type":"ContainerStarted","Data":"b839a4dffb22a75f3657a1d1eebb4e7c86aa3448b01b75268a7fa008e4d35304"} Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.477641 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7dh4x" event={"ID":"24b04661-c1d8-41c9-9365-e36e19ac638c","Type":"ContainerDied","Data":"2286f181a3e83de20d08651fe2ae66a016d642feb7c643ba456412a7fab33ae8"} Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.477668 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2286f181a3e83de20d08651fe2ae66a016d642feb7c643ba456412a7fab33ae8" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.477709 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7dh4x" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.484317 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmp96\" (UniqueName: \"kubernetes.io/projected/e1293cec-4975-472a-adc0-8d14637a64fe-kube-api-access-dmp96\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.484378 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.484399 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-config\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.484420 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.484465 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.524144 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-x5wp2"] Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.536204 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371991.31859 podStartE2EDuration="45.536185101s" podCreationTimestamp="2026-01-01 08:44:58 +0000 UTC" firstStartedPulling="2026-01-01 08:45:00.559153779 +0000 UTC m=+1109.694422558" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:45:43.522587059 +0000 UTC m=+1152.657855838" watchObservedRunningTime="2026-01-01 08:45:43.536185101 +0000 UTC m=+1152.671453860" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.586098 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.586145 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-config\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.586312 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " 
pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.586446 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.586616 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmp96\" (UniqueName: \"kubernetes.io/projected/e1293cec-4975-472a-adc0-8d14637a64fe-kube-api-access-dmp96\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.587068 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.587158 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-config\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.587215 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 
08:45:43.587645 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.603271 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmp96\" (UniqueName: \"kubernetes.io/projected/e1293cec-4975-472a-adc0-8d14637a64fe-kube-api-access-dmp96\") pod \"dnsmasq-dns-6cb545bd4c-x5wp2\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:43 crc kubenswrapper[4867]: I0101 08:45:43.739135 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.113723 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" podUID="44dcf904-76b7-4932-8731-978615eadd0a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 01 08:45:44 crc kubenswrapper[4867]: W0101 08:45:44.200643 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1293cec_4975_472a_adc0_8d14637a64fe.slice/crio-84044f6f92c0a380f8340b3831ce5ae52f63d3173d961a67ef42c3d73cc362b0 WatchSource:0}: Error finding container 84044f6f92c0a380f8340b3831ce5ae52f63d3173d961a67ef42c3d73cc362b0: Status 404 returned error can't find the container with id 84044f6f92c0a380f8340b3831ce5ae52f63d3173d961a67ef42c3d73cc362b0 Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.208060 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-x5wp2"] Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 
08:45:44.453129 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.487168 4867 generic.go:334] "Generic (PLEG): container finished" podID="44dcf904-76b7-4932-8731-978615eadd0a" containerID="7beffd6c5a6d6111b79ca3ecbfd5e6c4852dba9d20080a5f77976783df05ff63" exitCode=0 Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.487269 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" event={"ID":"44dcf904-76b7-4932-8731-978615eadd0a","Type":"ContainerDied","Data":"7beffd6c5a6d6111b79ca3ecbfd5e6c4852dba9d20080a5f77976783df05ff63"} Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.498207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" event={"ID":"e1293cec-4975-472a-adc0-8d14637a64fe","Type":"ContainerStarted","Data":"84044f6f92c0a380f8340b3831ce5ae52f63d3173d961a67ef42c3d73cc362b0"} Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.526678 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.536821 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.544751 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-tjwnh" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.545040 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.545332 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.545612 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.572187 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.708713 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-cache\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.708972 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.709144 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc 
kubenswrapper[4867]: I0101 08:45:44.709181 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pktsx\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-kube-api-access-pktsx\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.709203 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-lock\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.811293 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-cache\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.811375 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.811438 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.811460 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pktsx\" (UniqueName: 
\"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-kube-api-access-pktsx\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.811482 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-lock\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: E0101 08:45:44.811617 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 01 08:45:44 crc kubenswrapper[4867]: E0101 08:45:44.811651 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 01 08:45:44 crc kubenswrapper[4867]: E0101 08:45:44.811723 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift podName:f1f687f2-3229-401c-b5cb-f79e96311c45 nodeName:}" failed. No retries permitted until 2026-01-01 08:45:45.311701522 +0000 UTC m=+1154.446970301 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift") pod "swift-storage-0" (UID: "f1f687f2-3229-401c-b5cb-f79e96311c45") : configmap "swift-ring-files" not found Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.811918 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.812034 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-lock\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.812581 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-cache\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.835775 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pktsx\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-kube-api-access-pktsx\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:44 crc kubenswrapper[4867]: I0101 08:45:44.845512 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " 
pod="openstack/swift-storage-0" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.067453 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-sn8tf"] Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.069511 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.072256 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.072601 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.081614 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.111953 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sn8tf"] Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.221397 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-scripts\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.221662 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-ring-data-devices\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.221700 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-swiftconf\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.221736 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kdg5\" (UniqueName: \"kubernetes.io/projected/6640c65c-7090-4961-ba25-038487f6c62b-kube-api-access-6kdg5\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.221768 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-dispersionconf\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.221844 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-combined-ca-bundle\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.221968 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6640c65c-7090-4961-ba25-038487f6c62b-etc-swift\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.323000 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-ring-data-devices\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.323037 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-swiftconf\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.323065 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kdg5\" (UniqueName: \"kubernetes.io/projected/6640c65c-7090-4961-ba25-038487f6c62b-kube-api-access-6kdg5\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.323095 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-dispersionconf\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.323140 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-combined-ca-bundle\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.323169 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.323242 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6640c65c-7090-4961-ba25-038487f6c62b-etc-swift\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.323289 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-scripts\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: E0101 08:45:45.323995 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 01 08:45:45 crc kubenswrapper[4867]: E0101 08:45:45.324029 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 01 08:45:45 crc kubenswrapper[4867]: E0101 08:45:45.324242 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift podName:f1f687f2-3229-401c-b5cb-f79e96311c45 nodeName:}" failed. No retries permitted until 2026-01-01 08:45:46.324193579 +0000 UTC m=+1155.459462388 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift") pod "swift-storage-0" (UID: "f1f687f2-3229-401c-b5cb-f79e96311c45") : configmap "swift-ring-files" not found Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.324737 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6640c65c-7090-4961-ba25-038487f6c62b-etc-swift\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.324915 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-ring-data-devices\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.325278 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-scripts\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.332398 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-swiftconf\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.332482 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-dispersionconf\") pod 
\"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf"
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.333041 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-combined-ca-bundle\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf"
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.342081 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kdg5\" (UniqueName: \"kubernetes.io/projected/6640c65c-7090-4961-ba25-038487f6c62b-kube-api-access-6kdg5\") pod \"swift-ring-rebalance-sn8tf\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " pod="openstack/swift-ring-rebalance-sn8tf"
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.391393 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sn8tf"
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.426227 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6"
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.527404 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-dns-svc\") pod \"44dcf904-76b7-4932-8731-978615eadd0a\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") "
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.527924 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-ovsdbserver-nb\") pod \"44dcf904-76b7-4932-8731-978615eadd0a\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") "
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.528007 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsbqx\" (UniqueName: \"kubernetes.io/projected/44dcf904-76b7-4932-8731-978615eadd0a-kube-api-access-hsbqx\") pod \"44dcf904-76b7-4932-8731-978615eadd0a\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") "
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.528080 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-config\") pod \"44dcf904-76b7-4932-8731-978615eadd0a\" (UID: \"44dcf904-76b7-4932-8731-978615eadd0a\") "
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.538960 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44dcf904-76b7-4932-8731-978615eadd0a-kube-api-access-hsbqx" (OuterVolumeSpecName: "kube-api-access-hsbqx") pod "44dcf904-76b7-4932-8731-978615eadd0a" (UID: "44dcf904-76b7-4932-8731-978615eadd0a"). InnerVolumeSpecName "kube-api-access-hsbqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.550627 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6"
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.551650 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6" event={"ID":"44dcf904-76b7-4932-8731-978615eadd0a","Type":"ContainerDied","Data":"1fc17caa18efa7a1f4269c9635f80294f56315fae64dd46d2176441969778651"}
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.551739 4867 scope.go:117] "RemoveContainer" containerID="7beffd6c5a6d6111b79ca3ecbfd5e6c4852dba9d20080a5f77976783df05ff63"
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.557087 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" event={"ID":"e1293cec-4975-472a-adc0-8d14637a64fe","Type":"ContainerStarted","Data":"6b49aef3bdc30e3be5984484878d48a8da496fe53750f664add0a142424f5d99"}
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.619849 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44dcf904-76b7-4932-8731-978615eadd0a" (UID: "44dcf904-76b7-4932-8731-978615eadd0a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.620505 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-config" (OuterVolumeSpecName: "config") pod "44dcf904-76b7-4932-8731-978615eadd0a" (UID: "44dcf904-76b7-4932-8731-978615eadd0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.626288 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44dcf904-76b7-4932-8731-978615eadd0a" (UID: "44dcf904-76b7-4932-8731-978615eadd0a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.630545 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-config\") on node \"crc\" DevicePath \"\""
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.630652 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.630740 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44dcf904-76b7-4932-8731-978615eadd0a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.630803 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsbqx\" (UniqueName: \"kubernetes.io/projected/44dcf904-76b7-4932-8731-978615eadd0a-kube-api-access-hsbqx\") on node \"crc\" DevicePath \"\""
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.771986 4867 scope.go:117] "RemoveContainer" containerID="ba2ca743dc66600316d6eefca657a40868329628c1ce0189a5d81d6ba7076f94"
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.886714 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6"]
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.895638 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-rhzm6"]
Jan 01 08:45:45 crc kubenswrapper[4867]: I0101 08:45:45.912190 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sn8tf"]
Jan 01 08:45:45 crc kubenswrapper[4867]: W0101 08:45:45.918520 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6640c65c_7090_4961_ba25_038487f6c62b.slice/crio-1ce90bf840fa08b6ed96cf7b4c93c2fee865af26819dbf0adf9adb266dcbbdf3 WatchSource:0}: Error finding container 1ce90bf840fa08b6ed96cf7b4c93c2fee865af26819dbf0adf9adb266dcbbdf3: Status 404 returned error can't find the container with id 1ce90bf840fa08b6ed96cf7b4c93c2fee865af26819dbf0adf9adb266dcbbdf3
Jan 01 08:45:46 crc kubenswrapper[4867]: I0101 08:45:46.343560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0"
Jan 01 08:45:46 crc kubenswrapper[4867]: E0101 08:45:46.343831 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 01 08:45:46 crc kubenswrapper[4867]: E0101 08:45:46.343877 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 01 08:45:46 crc kubenswrapper[4867]: E0101 08:45:46.344010 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift podName:f1f687f2-3229-401c-b5cb-f79e96311c45 nodeName:}" failed. No retries permitted until 2026-01-01 08:45:48.343973527 +0000 UTC m=+1157.479242336 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift") pod "swift-storage-0" (UID: "f1f687f2-3229-401c-b5cb-f79e96311c45") : configmap "swift-ring-files" not found
Jan 01 08:45:46 crc kubenswrapper[4867]: I0101 08:45:46.568651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sn8tf" event={"ID":"6640c65c-7090-4961-ba25-038487f6c62b","Type":"ContainerStarted","Data":"1ce90bf840fa08b6ed96cf7b4c93c2fee865af26819dbf0adf9adb266dcbbdf3"}
Jan 01 08:45:46 crc kubenswrapper[4867]: I0101 08:45:46.571032 4867 generic.go:334] "Generic (PLEG): container finished" podID="e1293cec-4975-472a-adc0-8d14637a64fe" containerID="6b49aef3bdc30e3be5984484878d48a8da496fe53750f664add0a142424f5d99" exitCode=0
Jan 01 08:45:46 crc kubenswrapper[4867]: I0101 08:45:46.571090 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" event={"ID":"e1293cec-4975-472a-adc0-8d14637a64fe","Type":"ContainerDied","Data":"6b49aef3bdc30e3be5984484878d48a8da496fe53750f664add0a142424f5d99"}
Jan 01 08:45:47 crc kubenswrapper[4867]: I0101 08:45:47.162738 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44dcf904-76b7-4932-8731-978615eadd0a" path="/var/lib/kubelet/pods/44dcf904-76b7-4932-8731-978615eadd0a/volumes"
Jan 01 08:45:47 crc kubenswrapper[4867]: I0101 08:45:47.583640 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" event={"ID":"e1293cec-4975-472a-adc0-8d14637a64fe","Type":"ContainerStarted","Data":"0b406d5c43d02ac6453d2af0a9651013ddc04057462f18dba4ed864d0c258363"}
Jan 01 08:45:47 crc kubenswrapper[4867]: I0101 08:45:47.583793 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2"
Jan 01 08:45:47 crc kubenswrapper[4867]: I0101 08:45:47.621766 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" podStartSLOduration=4.621740052 podStartE2EDuration="4.621740052s" podCreationTimestamp="2026-01-01 08:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:45:47.607004178 +0000 UTC m=+1156.742272957" watchObservedRunningTime="2026-01-01 08:45:47.621740052 +0000 UTC m=+1156.757008821"
Jan 01 08:45:48 crc kubenswrapper[4867]: I0101 08:45:48.382852 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0"
Jan 01 08:45:48 crc kubenswrapper[4867]: E0101 08:45:48.383158 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 01 08:45:48 crc kubenswrapper[4867]: E0101 08:45:48.383444 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 01 08:45:48 crc kubenswrapper[4867]: E0101 08:45:48.383515 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift podName:f1f687f2-3229-401c-b5cb-f79e96311c45 nodeName:}" failed. No retries permitted until 2026-01-01 08:45:52.383496912 +0000 UTC m=+1161.518765681 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift") pod "swift-storage-0" (UID: "f1f687f2-3229-401c-b5cb-f79e96311c45") : configmap "swift-ring-files" not found
Jan 01 08:45:50 crc kubenswrapper[4867]: I0101 08:45:50.009161 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 01 08:45:50 crc kubenswrapper[4867]: I0101 08:45:50.009536 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 01 08:45:50 crc kubenswrapper[4867]: I0101 08:45:50.089908 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 01 08:45:50 crc kubenswrapper[4867]: I0101 08:45:50.607962 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sn8tf" event={"ID":"6640c65c-7090-4961-ba25-038487f6c62b","Type":"ContainerStarted","Data":"18becf772101ab6b2c53a4dce6cb85a47ab0a01a65a2c7b3664c945f540dbbb7"}
Jan 01 08:45:50 crc kubenswrapper[4867]: I0101 08:45:50.625939 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-sn8tf" podStartSLOduration=1.927050908 podStartE2EDuration="5.625918817s" podCreationTimestamp="2026-01-01 08:45:45 +0000 UTC" firstStartedPulling="2026-01-01 08:45:45.923375962 +0000 UTC m=+1155.058644731" lastFinishedPulling="2026-01-01 08:45:49.622243871 +0000 UTC m=+1158.757512640" observedRunningTime="2026-01-01 08:45:50.624010794 +0000 UTC m=+1159.759279593" watchObservedRunningTime="2026-01-01 08:45:50.625918817 +0000 UTC m=+1159.761187596"
Jan 01 08:45:50 crc kubenswrapper[4867]: I0101 08:45:50.684738 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.196401 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.331615 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.331682 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.331731 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.332413 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81817d336fc213658d5e33bc8d0ea2842c8843cc5c0fbe3de4796b71ea1ba225"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.332476 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://81817d336fc213658d5e33bc8d0ea2842c8843cc5c0fbe3de4796b71ea1ba225" gracePeriod=600
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.383357 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4t8xd"]
Jan 01 08:45:51 crc kubenswrapper[4867]: E0101 08:45:51.383983 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44dcf904-76b7-4932-8731-978615eadd0a" containerName="init"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.384006 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="44dcf904-76b7-4932-8731-978615eadd0a" containerName="init"
Jan 01 08:45:51 crc kubenswrapper[4867]: E0101 08:45:51.384077 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44dcf904-76b7-4932-8731-978615eadd0a" containerName="dnsmasq-dns"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.384088 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="44dcf904-76b7-4932-8731-978615eadd0a" containerName="dnsmasq-dns"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.384289 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="44dcf904-76b7-4932-8731-978615eadd0a" containerName="dnsmasq-dns"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.384909 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4t8xd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.394011 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4t8xd"]
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.531402 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d8ca-account-create-update-bdzbd"]
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.534171 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8ca-account-create-update-bdzbd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.536787 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncx6c\" (UniqueName: \"kubernetes.io/projected/86fbff8a-ec9f-4575-be56-2e32acdf53ad-kube-api-access-ncx6c\") pod \"keystone-db-create-4t8xd\" (UID: \"86fbff8a-ec9f-4575-be56-2e32acdf53ad\") " pod="openstack/keystone-db-create-4t8xd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.536858 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86fbff8a-ec9f-4575-be56-2e32acdf53ad-operator-scripts\") pod \"keystone-db-create-4t8xd\" (UID: \"86fbff8a-ec9f-4575-be56-2e32acdf53ad\") " pod="openstack/keystone-db-create-4t8xd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.539254 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.539476 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8ca-account-create-update-bdzbd"]
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.615543 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="81817d336fc213658d5e33bc8d0ea2842c8843cc5c0fbe3de4796b71ea1ba225" exitCode=0
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.616335 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"81817d336fc213658d5e33bc8d0ea2842c8843cc5c0fbe3de4796b71ea1ba225"}
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.616373 4867 scope.go:117] "RemoveContainer" containerID="5c0242f8cb2cb86cd3c1961752ae798238bc46747b9db37482dfc5091eb3d814"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.638991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncx6c\" (UniqueName: \"kubernetes.io/projected/86fbff8a-ec9f-4575-be56-2e32acdf53ad-kube-api-access-ncx6c\") pod \"keystone-db-create-4t8xd\" (UID: \"86fbff8a-ec9f-4575-be56-2e32acdf53ad\") " pod="openstack/keystone-db-create-4t8xd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.639065 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86fbff8a-ec9f-4575-be56-2e32acdf53ad-operator-scripts\") pod \"keystone-db-create-4t8xd\" (UID: \"86fbff8a-ec9f-4575-be56-2e32acdf53ad\") " pod="openstack/keystone-db-create-4t8xd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.639119 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwvlg\" (UniqueName: \"kubernetes.io/projected/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-kube-api-access-cwvlg\") pod \"keystone-d8ca-account-create-update-bdzbd\" (UID: \"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0\") " pod="openstack/keystone-d8ca-account-create-update-bdzbd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.639143 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-operator-scripts\") pod \"keystone-d8ca-account-create-update-bdzbd\" (UID: \"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0\") " pod="openstack/keystone-d8ca-account-create-update-bdzbd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.641896 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86fbff8a-ec9f-4575-be56-2e32acdf53ad-operator-scripts\") pod \"keystone-db-create-4t8xd\" (UID: \"86fbff8a-ec9f-4575-be56-2e32acdf53ad\") " pod="openstack/keystone-db-create-4t8xd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.657190 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncx6c\" (UniqueName: \"kubernetes.io/projected/86fbff8a-ec9f-4575-be56-2e32acdf53ad-kube-api-access-ncx6c\") pod \"keystone-db-create-4t8xd\" (UID: \"86fbff8a-ec9f-4575-be56-2e32acdf53ad\") " pod="openstack/keystone-db-create-4t8xd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.707195 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-2p22z"]
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.717746 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4t8xd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.724496 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2p22z"]
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.724590 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2p22z"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.749966 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwvlg\" (UniqueName: \"kubernetes.io/projected/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-kube-api-access-cwvlg\") pod \"keystone-d8ca-account-create-update-bdzbd\" (UID: \"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0\") " pod="openstack/keystone-d8ca-account-create-update-bdzbd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.750267 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-operator-scripts\") pod \"keystone-d8ca-account-create-update-bdzbd\" (UID: \"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0\") " pod="openstack/keystone-d8ca-account-create-update-bdzbd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.751249 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-operator-scripts\") pod \"keystone-d8ca-account-create-update-bdzbd\" (UID: \"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0\") " pod="openstack/keystone-d8ca-account-create-update-bdzbd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.767033 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwvlg\" (UniqueName: \"kubernetes.io/projected/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-kube-api-access-cwvlg\") pod \"keystone-d8ca-account-create-update-bdzbd\" (UID: \"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0\") " pod="openstack/keystone-d8ca-account-create-update-bdzbd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.834155 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2ede-account-create-update-2qf6t"]
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.835771 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2ede-account-create-update-2qf6t"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.838316 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.840492 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2ede-account-create-update-2qf6t"]
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.864932 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8ca-account-create-update-bdzbd"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.868304 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2pw9\" (UniqueName: \"kubernetes.io/projected/de595f01-a50d-44f7-a2da-6dbb32c429ec-kube-api-access-r2pw9\") pod \"placement-db-create-2p22z\" (UID: \"de595f01-a50d-44f7-a2da-6dbb32c429ec\") " pod="openstack/placement-db-create-2p22z"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.868484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de595f01-a50d-44f7-a2da-6dbb32c429ec-operator-scripts\") pod \"placement-db-create-2p22z\" (UID: \"de595f01-a50d-44f7-a2da-6dbb32c429ec\") " pod="openstack/placement-db-create-2p22z"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.970236 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de595f01-a50d-44f7-a2da-6dbb32c429ec-operator-scripts\") pod \"placement-db-create-2p22z\" (UID: \"de595f01-a50d-44f7-a2da-6dbb32c429ec\") " pod="openstack/placement-db-create-2p22z"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.970294 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd08681-e332-4a24-9e90-d0085dc5e069-operator-scripts\") pod \"placement-2ede-account-create-update-2qf6t\" (UID: \"9fd08681-e332-4a24-9e90-d0085dc5e069\") " pod="openstack/placement-2ede-account-create-update-2qf6t"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.970382 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjr46\" (UniqueName: \"kubernetes.io/projected/9fd08681-e332-4a24-9e90-d0085dc5e069-kube-api-access-fjr46\") pod \"placement-2ede-account-create-update-2qf6t\" (UID: \"9fd08681-e332-4a24-9e90-d0085dc5e069\") " pod="openstack/placement-2ede-account-create-update-2qf6t"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.970496 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2pw9\" (UniqueName: \"kubernetes.io/projected/de595f01-a50d-44f7-a2da-6dbb32c429ec-kube-api-access-r2pw9\") pod \"placement-db-create-2p22z\" (UID: \"de595f01-a50d-44f7-a2da-6dbb32c429ec\") " pod="openstack/placement-db-create-2p22z"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.971167 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de595f01-a50d-44f7-a2da-6dbb32c429ec-operator-scripts\") pod \"placement-db-create-2p22z\" (UID: \"de595f01-a50d-44f7-a2da-6dbb32c429ec\") " pod="openstack/placement-db-create-2p22z"
Jan 01 08:45:51 crc kubenswrapper[4867]: I0101 08:45:51.999449 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2pw9\" (UniqueName: \"kubernetes.io/projected/de595f01-a50d-44f7-a2da-6dbb32c429ec-kube-api-access-r2pw9\") pod \"placement-db-create-2p22z\" (UID: \"de595f01-a50d-44f7-a2da-6dbb32c429ec\") " pod="openstack/placement-db-create-2p22z"
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.043269 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2p22z"
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.060103 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4t8xd"]
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.071465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjr46\" (UniqueName: \"kubernetes.io/projected/9fd08681-e332-4a24-9e90-d0085dc5e069-kube-api-access-fjr46\") pod \"placement-2ede-account-create-update-2qf6t\" (UID: \"9fd08681-e332-4a24-9e90-d0085dc5e069\") " pod="openstack/placement-2ede-account-create-update-2qf6t"
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.071648 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd08681-e332-4a24-9e90-d0085dc5e069-operator-scripts\") pod \"placement-2ede-account-create-update-2qf6t\" (UID: \"9fd08681-e332-4a24-9e90-d0085dc5e069\") " pod="openstack/placement-2ede-account-create-update-2qf6t"
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.072435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd08681-e332-4a24-9e90-d0085dc5e069-operator-scripts\") pod \"placement-2ede-account-create-update-2qf6t\" (UID: \"9fd08681-e332-4a24-9e90-d0085dc5e069\") " pod="openstack/placement-2ede-account-create-update-2qf6t"
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.094649 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjr46\" (UniqueName: \"kubernetes.io/projected/9fd08681-e332-4a24-9e90-d0085dc5e069-kube-api-access-fjr46\") pod \"placement-2ede-account-create-update-2qf6t\" (UID: \"9fd08681-e332-4a24-9e90-d0085dc5e069\") " pod="openstack/placement-2ede-account-create-update-2qf6t"
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.188364 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2ede-account-create-update-2qf6t"
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.396363 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8ca-account-create-update-bdzbd"]
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.480038 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0"
Jan 01 08:45:52 crc kubenswrapper[4867]: E0101 08:45:52.480225 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 01 08:45:52 crc kubenswrapper[4867]: E0101 08:45:52.480255 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 01 08:45:52 crc kubenswrapper[4867]: E0101 08:45:52.480511 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift podName:f1f687f2-3229-401c-b5cb-f79e96311c45 nodeName:}" failed. No retries permitted until 2026-01-01 08:46:00.480493836 +0000 UTC m=+1169.615762605 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift") pod "swift-storage-0" (UID: "f1f687f2-3229-401c-b5cb-f79e96311c45") : configmap "swift-ring-files" not found
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.596822 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2p22z"]
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.628803 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"fae12ab6ce4b32e7095b166bc2001d0435bf314dafdb60059b95e31213f00b52"}
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.633497 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2p22z" event={"ID":"de595f01-a50d-44f7-a2da-6dbb32c429ec","Type":"ContainerStarted","Data":"898ebf760cdd91e7c0116116c53d5be45fdeb912e8ffe151a839e07863b7e9fb"}
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.635324 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4t8xd" event={"ID":"86fbff8a-ec9f-4575-be56-2e32acdf53ad","Type":"ContainerStarted","Data":"0d53a6fe6b01eb124abe4dedb90d283c58982a5634b2bd9c0b8f028378652e7d"}
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.635357 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4t8xd" event={"ID":"86fbff8a-ec9f-4575-be56-2e32acdf53ad","Type":"ContainerStarted","Data":"9c3f869705a775c9c0bb599ca54dcf71067539ded73276d86ca6242faf0af8d3"}
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.636562 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8ca-account-create-update-bdzbd" event={"ID":"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0","Type":"ContainerStarted","Data":"3d4d78dbcd5af8ebb174d95fb01bea809792013aaa5a20c15880c4595f08ee67"}
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.717504 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-4t8xd" podStartSLOduration=1.717484384 podStartE2EDuration="1.717484384s" podCreationTimestamp="2026-01-01 08:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:45:52.680828584 +0000 UTC m=+1161.816097353" watchObservedRunningTime="2026-01-01 08:45:52.717484384 +0000 UTC m=+1161.852753153"
Jan 01 08:45:52 crc kubenswrapper[4867]: I0101 08:45:52.731950 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2ede-account-create-update-2qf6t"]
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.648095 4867 generic.go:334] "Generic (PLEG): container finished" podID="9fd08681-e332-4a24-9e90-d0085dc5e069" containerID="5b37eaae3cd4207fb60bf9295335cfc6fc1bae0503f20ddf7845ef59590b88e0" exitCode=0
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.648149 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2ede-account-create-update-2qf6t" event={"ID":"9fd08681-e332-4a24-9e90-d0085dc5e069","Type":"ContainerDied","Data":"5b37eaae3cd4207fb60bf9295335cfc6fc1bae0503f20ddf7845ef59590b88e0"}
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.648407 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2ede-account-create-update-2qf6t" event={"ID":"9fd08681-e332-4a24-9e90-d0085dc5e069","Type":"ContainerStarted","Data":"4e6fca9b4f9532f2e61bdc9852982fe27364f8da64ad993c517e57142ee80ced"}
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.651453 4867 generic.go:334] "Generic (PLEG): container finished" podID="de595f01-a50d-44f7-a2da-6dbb32c429ec" containerID="27a459f45063b31c51749133d58df4adb80c865b6cba29a1ddd5b68907c8e260" exitCode=0
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.651546 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2p22z" event={"ID":"de595f01-a50d-44f7-a2da-6dbb32c429ec","Type":"ContainerDied","Data":"27a459f45063b31c51749133d58df4adb80c865b6cba29a1ddd5b68907c8e260"}
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.653616 4867 generic.go:334] "Generic (PLEG): container finished" podID="86fbff8a-ec9f-4575-be56-2e32acdf53ad" containerID="0d53a6fe6b01eb124abe4dedb90d283c58982a5634b2bd9c0b8f028378652e7d" exitCode=0
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.653669 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4t8xd" event={"ID":"86fbff8a-ec9f-4575-be56-2e32acdf53ad","Type":"ContainerDied","Data":"0d53a6fe6b01eb124abe4dedb90d283c58982a5634b2bd9c0b8f028378652e7d"}
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.655441 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0" containerID="0d11f9fcb8565559e963bacc9e303403745a1fea936cb1a19e716220ca56c821" exitCode=0
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.655509 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8ca-account-create-update-bdzbd" event={"ID":"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0","Type":"ContainerDied","Data":"0d11f9fcb8565559e963bacc9e303403745a1fea936cb1a19e716220ca56c821"}
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.740926 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2"
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.807534 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v5cfx"]
Jan 01 08:45:53 crc kubenswrapper[4867]: I0101 08:45:53.807753 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" podUID="a4dcbf41-d27a-4a66-a24a-785a611208a6" containerName="dnsmasq-dns" containerID="cri-o://439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d" gracePeriod=10
Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.308763 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx"
Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.410473 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-config\") pod \"a4dcbf41-d27a-4a66-a24a-785a611208a6\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") "
Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.410593 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-dns-svc\") pod \"a4dcbf41-d27a-4a66-a24a-785a611208a6\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") "
Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.410689 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-sb\") pod \"a4dcbf41-d27a-4a66-a24a-785a611208a6\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") "
Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.410736 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-nb\") pod \"a4dcbf41-d27a-4a66-a24a-785a611208a6\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") "
Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.410762 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjl2c\" 
(UniqueName: \"kubernetes.io/projected/a4dcbf41-d27a-4a66-a24a-785a611208a6-kube-api-access-zjl2c\") pod \"a4dcbf41-d27a-4a66-a24a-785a611208a6\" (UID: \"a4dcbf41-d27a-4a66-a24a-785a611208a6\") " Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.417526 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4dcbf41-d27a-4a66-a24a-785a611208a6-kube-api-access-zjl2c" (OuterVolumeSpecName: "kube-api-access-zjl2c") pod "a4dcbf41-d27a-4a66-a24a-785a611208a6" (UID: "a4dcbf41-d27a-4a66-a24a-785a611208a6"). InnerVolumeSpecName "kube-api-access-zjl2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.455160 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-config" (OuterVolumeSpecName: "config") pod "a4dcbf41-d27a-4a66-a24a-785a611208a6" (UID: "a4dcbf41-d27a-4a66-a24a-785a611208a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.455591 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a4dcbf41-d27a-4a66-a24a-785a611208a6" (UID: "a4dcbf41-d27a-4a66-a24a-785a611208a6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.462986 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4dcbf41-d27a-4a66-a24a-785a611208a6" (UID: "a4dcbf41-d27a-4a66-a24a-785a611208a6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.465200 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4dcbf41-d27a-4a66-a24a-785a611208a6" (UID: "a4dcbf41-d27a-4a66-a24a-785a611208a6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.512176 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.512218 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.512234 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjl2c\" (UniqueName: \"kubernetes.io/projected/a4dcbf41-d27a-4a66-a24a-785a611208a6-kube-api-access-zjl2c\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.512249 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.512260 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4dcbf41-d27a-4a66-a24a-785a611208a6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.668171 4867 generic.go:334] "Generic (PLEG): container finished" podID="a4dcbf41-d27a-4a66-a24a-785a611208a6" 
containerID="439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d" exitCode=0 Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.668335 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" event={"ID":"a4dcbf41-d27a-4a66-a24a-785a611208a6","Type":"ContainerDied","Data":"439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d"} Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.668417 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" event={"ID":"a4dcbf41-d27a-4a66-a24a-785a611208a6","Type":"ContainerDied","Data":"9208fdcbf89954cf3790f8b02b5aaccfe02d7fcf2a02992f9696f2f46e41fe94"} Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.668461 4867 scope.go:117] "RemoveContainer" containerID="439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.668459 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-v5cfx" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.702160 4867 scope.go:117] "RemoveContainer" containerID="27a350461decde3e1c6b7abed2748c74f42f57b8606023bad6ddf8eae19fd27c" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.722267 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v5cfx"] Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.730492 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v5cfx"] Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.748358 4867 scope.go:117] "RemoveContainer" containerID="439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d" Jan 01 08:45:54 crc kubenswrapper[4867]: E0101 08:45:54.748864 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d\": container with ID starting with 439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d not found: ID does not exist" containerID="439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.748906 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d"} err="failed to get container status \"439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d\": rpc error: code = NotFound desc = could not find container \"439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d\": container with ID starting with 439664e75e182759edbab275d1ecc86257de913c81115c69833fc5bc5c16432d not found: ID does not exist" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.748927 4867 scope.go:117] "RemoveContainer" containerID="27a350461decde3e1c6b7abed2748c74f42f57b8606023bad6ddf8eae19fd27c" Jan 01 
08:45:54 crc kubenswrapper[4867]: E0101 08:45:54.749219 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27a350461decde3e1c6b7abed2748c74f42f57b8606023bad6ddf8eae19fd27c\": container with ID starting with 27a350461decde3e1c6b7abed2748c74f42f57b8606023bad6ddf8eae19fd27c not found: ID does not exist" containerID="27a350461decde3e1c6b7abed2748c74f42f57b8606023bad6ddf8eae19fd27c" Jan 01 08:45:54 crc kubenswrapper[4867]: I0101 08:45:54.749256 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27a350461decde3e1c6b7abed2748c74f42f57b8606023bad6ddf8eae19fd27c"} err="failed to get container status \"27a350461decde3e1c6b7abed2748c74f42f57b8606023bad6ddf8eae19fd27c\": rpc error: code = NotFound desc = could not find container \"27a350461decde3e1c6b7abed2748c74f42f57b8606023bad6ddf8eae19fd27c\": container with ID starting with 27a350461decde3e1c6b7abed2748c74f42f57b8606023bad6ddf8eae19fd27c not found: ID does not exist" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.011012 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8ca-account-create-update-bdzbd" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.093076 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-4t8xd" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.128504 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwvlg\" (UniqueName: \"kubernetes.io/projected/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-kube-api-access-cwvlg\") pod \"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0\" (UID: \"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0\") " Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.128697 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-operator-scripts\") pod \"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0\" (UID: \"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0\") " Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.129821 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0" (UID: "f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.133108 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-kube-api-access-cwvlg" (OuterVolumeSpecName: "kube-api-access-cwvlg") pod "f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0" (UID: "f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0"). InnerVolumeSpecName "kube-api-access-cwvlg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.140474 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4dcbf41-d27a-4a66-a24a-785a611208a6" path="/var/lib/kubelet/pods/a4dcbf41-d27a-4a66-a24a-785a611208a6/volumes" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.229930 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncx6c\" (UniqueName: \"kubernetes.io/projected/86fbff8a-ec9f-4575-be56-2e32acdf53ad-kube-api-access-ncx6c\") pod \"86fbff8a-ec9f-4575-be56-2e32acdf53ad\" (UID: \"86fbff8a-ec9f-4575-be56-2e32acdf53ad\") " Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.229991 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86fbff8a-ec9f-4575-be56-2e32acdf53ad-operator-scripts\") pod \"86fbff8a-ec9f-4575-be56-2e32acdf53ad\" (UID: \"86fbff8a-ec9f-4575-be56-2e32acdf53ad\") " Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.230617 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86fbff8a-ec9f-4575-be56-2e32acdf53ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86fbff8a-ec9f-4575-be56-2e32acdf53ad" (UID: "86fbff8a-ec9f-4575-be56-2e32acdf53ad"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.231810 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86fbff8a-ec9f-4575-be56-2e32acdf53ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.232046 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.232061 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwvlg\" (UniqueName: \"kubernetes.io/projected/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0-kube-api-access-cwvlg\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.232901 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fbff8a-ec9f-4575-be56-2e32acdf53ad-kube-api-access-ncx6c" (OuterVolumeSpecName: "kube-api-access-ncx6c") pod "86fbff8a-ec9f-4575-be56-2e32acdf53ad" (UID: "86fbff8a-ec9f-4575-be56-2e32acdf53ad"). InnerVolumeSpecName "kube-api-access-ncx6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.236997 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2p22z" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.253332 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2ede-account-create-update-2qf6t" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.333674 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjr46\" (UniqueName: \"kubernetes.io/projected/9fd08681-e332-4a24-9e90-d0085dc5e069-kube-api-access-fjr46\") pod \"9fd08681-e332-4a24-9e90-d0085dc5e069\" (UID: \"9fd08681-e332-4a24-9e90-d0085dc5e069\") " Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.333847 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de595f01-a50d-44f7-a2da-6dbb32c429ec-operator-scripts\") pod \"de595f01-a50d-44f7-a2da-6dbb32c429ec\" (UID: \"de595f01-a50d-44f7-a2da-6dbb32c429ec\") " Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.333916 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd08681-e332-4a24-9e90-d0085dc5e069-operator-scripts\") pod \"9fd08681-e332-4a24-9e90-d0085dc5e069\" (UID: \"9fd08681-e332-4a24-9e90-d0085dc5e069\") " Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.333974 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2pw9\" (UniqueName: \"kubernetes.io/projected/de595f01-a50d-44f7-a2da-6dbb32c429ec-kube-api-access-r2pw9\") pod \"de595f01-a50d-44f7-a2da-6dbb32c429ec\" (UID: \"de595f01-a50d-44f7-a2da-6dbb32c429ec\") " Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.334416 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncx6c\" (UniqueName: \"kubernetes.io/projected/86fbff8a-ec9f-4575-be56-2e32acdf53ad-kube-api-access-ncx6c\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.334794 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9fd08681-e332-4a24-9e90-d0085dc5e069-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9fd08681-e332-4a24-9e90-d0085dc5e069" (UID: "9fd08681-e332-4a24-9e90-d0085dc5e069"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.335277 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de595f01-a50d-44f7-a2da-6dbb32c429ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de595f01-a50d-44f7-a2da-6dbb32c429ec" (UID: "de595f01-a50d-44f7-a2da-6dbb32c429ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.340298 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd08681-e332-4a24-9e90-d0085dc5e069-kube-api-access-fjr46" (OuterVolumeSpecName: "kube-api-access-fjr46") pod "9fd08681-e332-4a24-9e90-d0085dc5e069" (UID: "9fd08681-e332-4a24-9e90-d0085dc5e069"). InnerVolumeSpecName "kube-api-access-fjr46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.342310 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de595f01-a50d-44f7-a2da-6dbb32c429ec-kube-api-access-r2pw9" (OuterVolumeSpecName: "kube-api-access-r2pw9") pod "de595f01-a50d-44f7-a2da-6dbb32c429ec" (UID: "de595f01-a50d-44f7-a2da-6dbb32c429ec"). InnerVolumeSpecName "kube-api-access-r2pw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.436451 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2pw9\" (UniqueName: \"kubernetes.io/projected/de595f01-a50d-44f7-a2da-6dbb32c429ec-kube-api-access-r2pw9\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.436790 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjr46\" (UniqueName: \"kubernetes.io/projected/9fd08681-e332-4a24-9e90-d0085dc5e069-kube-api-access-fjr46\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.436804 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de595f01-a50d-44f7-a2da-6dbb32c429ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.436815 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd08681-e332-4a24-9e90-d0085dc5e069-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.682690 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2p22z" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.683488 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2p22z" event={"ID":"de595f01-a50d-44f7-a2da-6dbb32c429ec","Type":"ContainerDied","Data":"898ebf760cdd91e7c0116116c53d5be45fdeb912e8ffe151a839e07863b7e9fb"} Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.683532 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="898ebf760cdd91e7c0116116c53d5be45fdeb912e8ffe151a839e07863b7e9fb" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.688085 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-4t8xd" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.690472 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4t8xd" event={"ID":"86fbff8a-ec9f-4575-be56-2e32acdf53ad","Type":"ContainerDied","Data":"9c3f869705a775c9c0bb599ca54dcf71067539ded73276d86ca6242faf0af8d3"} Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.690514 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c3f869705a775c9c0bb599ca54dcf71067539ded73276d86ca6242faf0af8d3" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.694617 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8ca-account-create-update-bdzbd" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.694669 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8ca-account-create-update-bdzbd" event={"ID":"f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0","Type":"ContainerDied","Data":"3d4d78dbcd5af8ebb174d95fb01bea809792013aaa5a20c15880c4595f08ee67"} Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.694713 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d4d78dbcd5af8ebb174d95fb01bea809792013aaa5a20c15880c4595f08ee67" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.704005 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2ede-account-create-update-2qf6t" event={"ID":"9fd08681-e332-4a24-9e90-d0085dc5e069","Type":"ContainerDied","Data":"4e6fca9b4f9532f2e61bdc9852982fe27364f8da64ad993c517e57142ee80ced"} Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.704035 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e6fca9b4f9532f2e61bdc9852982fe27364f8da64ad993c517e57142ee80ced" Jan 01 08:45:55 crc kubenswrapper[4867]: I0101 08:45:55.704163 4867 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/placement-2ede-account-create-update-2qf6t" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.139419 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7bpzf"] Jan 01 08:45:57 crc kubenswrapper[4867]: E0101 08:45:57.140174 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86fbff8a-ec9f-4575-be56-2e32acdf53ad" containerName="mariadb-database-create" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.140197 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fbff8a-ec9f-4575-be56-2e32acdf53ad" containerName="mariadb-database-create" Jan 01 08:45:57 crc kubenswrapper[4867]: E0101 08:45:57.140220 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4dcbf41-d27a-4a66-a24a-785a611208a6" containerName="dnsmasq-dns" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.140233 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4dcbf41-d27a-4a66-a24a-785a611208a6" containerName="dnsmasq-dns" Jan 01 08:45:57 crc kubenswrapper[4867]: E0101 08:45:57.140247 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4dcbf41-d27a-4a66-a24a-785a611208a6" containerName="init" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.140257 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4dcbf41-d27a-4a66-a24a-785a611208a6" containerName="init" Jan 01 08:45:57 crc kubenswrapper[4867]: E0101 08:45:57.140289 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0" containerName="mariadb-account-create-update" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.140301 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0" containerName="mariadb-account-create-update" Jan 01 08:45:57 crc kubenswrapper[4867]: E0101 08:45:57.140325 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9fd08681-e332-4a24-9e90-d0085dc5e069" containerName="mariadb-account-create-update" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.140337 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd08681-e332-4a24-9e90-d0085dc5e069" containerName="mariadb-account-create-update" Jan 01 08:45:57 crc kubenswrapper[4867]: E0101 08:45:57.140358 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de595f01-a50d-44f7-a2da-6dbb32c429ec" containerName="mariadb-database-create" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.140368 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="de595f01-a50d-44f7-a2da-6dbb32c429ec" containerName="mariadb-database-create" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.140616 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0" containerName="mariadb-account-create-update" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.140637 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="de595f01-a50d-44f7-a2da-6dbb32c429ec" containerName="mariadb-database-create" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.140659 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd08681-e332-4a24-9e90-d0085dc5e069" containerName="mariadb-account-create-update" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.140675 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4dcbf41-d27a-4a66-a24a-785a611208a6" containerName="dnsmasq-dns" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.140717 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="86fbff8a-ec9f-4575-be56-2e32acdf53ad" containerName="mariadb-database-create" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.141564 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7bpzf" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.148106 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7bpzf"] Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.157147 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-69c1-account-create-update-4fm55"] Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.158596 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-69c1-account-create-update-4fm55" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.163526 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.173398 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-69c1-account-create-update-4fm55"] Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.289404 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/967c4acd-2c93-49a1-9b42-71e23f0b28d0-operator-scripts\") pod \"glance-69c1-account-create-update-4fm55\" (UID: \"967c4acd-2c93-49a1-9b42-71e23f0b28d0\") " pod="openstack/glance-69c1-account-create-update-4fm55" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.291038 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9582f632-f340-4a0f-b436-c959a32e797e-operator-scripts\") pod \"glance-db-create-7bpzf\" (UID: \"9582f632-f340-4a0f-b436-c959a32e797e\") " pod="openstack/glance-db-create-7bpzf" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.291394 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7fs8\" (UniqueName: 
\"kubernetes.io/projected/967c4acd-2c93-49a1-9b42-71e23f0b28d0-kube-api-access-g7fs8\") pod \"glance-69c1-account-create-update-4fm55\" (UID: \"967c4acd-2c93-49a1-9b42-71e23f0b28d0\") " pod="openstack/glance-69c1-account-create-update-4fm55" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.291555 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7m8b\" (UniqueName: \"kubernetes.io/projected/9582f632-f340-4a0f-b436-c959a32e797e-kube-api-access-f7m8b\") pod \"glance-db-create-7bpzf\" (UID: \"9582f632-f340-4a0f-b436-c959a32e797e\") " pod="openstack/glance-db-create-7bpzf" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.392500 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7m8b\" (UniqueName: \"kubernetes.io/projected/9582f632-f340-4a0f-b436-c959a32e797e-kube-api-access-f7m8b\") pod \"glance-db-create-7bpzf\" (UID: \"9582f632-f340-4a0f-b436-c959a32e797e\") " pod="openstack/glance-db-create-7bpzf" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.392564 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/967c4acd-2c93-49a1-9b42-71e23f0b28d0-operator-scripts\") pod \"glance-69c1-account-create-update-4fm55\" (UID: \"967c4acd-2c93-49a1-9b42-71e23f0b28d0\") " pod="openstack/glance-69c1-account-create-update-4fm55" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.392659 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9582f632-f340-4a0f-b436-c959a32e797e-operator-scripts\") pod \"glance-db-create-7bpzf\" (UID: \"9582f632-f340-4a0f-b436-c959a32e797e\") " pod="openstack/glance-db-create-7bpzf" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.392687 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-g7fs8\" (UniqueName: \"kubernetes.io/projected/967c4acd-2c93-49a1-9b42-71e23f0b28d0-kube-api-access-g7fs8\") pod \"glance-69c1-account-create-update-4fm55\" (UID: \"967c4acd-2c93-49a1-9b42-71e23f0b28d0\") " pod="openstack/glance-69c1-account-create-update-4fm55" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.393419 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/967c4acd-2c93-49a1-9b42-71e23f0b28d0-operator-scripts\") pod \"glance-69c1-account-create-update-4fm55\" (UID: \"967c4acd-2c93-49a1-9b42-71e23f0b28d0\") " pod="openstack/glance-69c1-account-create-update-4fm55" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.393419 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9582f632-f340-4a0f-b436-c959a32e797e-operator-scripts\") pod \"glance-db-create-7bpzf\" (UID: \"9582f632-f340-4a0f-b436-c959a32e797e\") " pod="openstack/glance-db-create-7bpzf" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.414819 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7fs8\" (UniqueName: \"kubernetes.io/projected/967c4acd-2c93-49a1-9b42-71e23f0b28d0-kube-api-access-g7fs8\") pod \"glance-69c1-account-create-update-4fm55\" (UID: \"967c4acd-2c93-49a1-9b42-71e23f0b28d0\") " pod="openstack/glance-69c1-account-create-update-4fm55" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.415151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7m8b\" (UniqueName: \"kubernetes.io/projected/9582f632-f340-4a0f-b436-c959a32e797e-kube-api-access-f7m8b\") pod \"glance-db-create-7bpzf\" (UID: \"9582f632-f340-4a0f-b436-c959a32e797e\") " pod="openstack/glance-db-create-7bpzf" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.460430 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7bpzf" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.488471 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-69c1-account-create-update-4fm55" Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.720910 4867 generic.go:334] "Generic (PLEG): container finished" podID="6640c65c-7090-4961-ba25-038487f6c62b" containerID="18becf772101ab6b2c53a4dce6cb85a47ab0a01a65a2c7b3664c945f540dbbb7" exitCode=0 Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.720956 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sn8tf" event={"ID":"6640c65c-7090-4961-ba25-038487f6c62b","Type":"ContainerDied","Data":"18becf772101ab6b2c53a4dce6cb85a47ab0a01a65a2c7b3664c945f540dbbb7"} Jan 01 08:45:57 crc kubenswrapper[4867]: I0101 08:45:57.948316 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7bpzf"] Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.011572 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-69c1-account-create-update-4fm55"] Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.724937 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7dh4x"] Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.737018 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7dh4x"] Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.742136 4867 generic.go:334] "Generic (PLEG): container finished" podID="9582f632-f340-4a0f-b436-c959a32e797e" containerID="334d4d6eb157e3eebf46b0a2ddb1231acde816efcbd086adbf555a1977e94879" exitCode=0 Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.742227 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7bpzf" 
event={"ID":"9582f632-f340-4a0f-b436-c959a32e797e","Type":"ContainerDied","Data":"334d4d6eb157e3eebf46b0a2ddb1231acde816efcbd086adbf555a1977e94879"} Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.742252 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7bpzf" event={"ID":"9582f632-f340-4a0f-b436-c959a32e797e","Type":"ContainerStarted","Data":"d0b4d493f35137673a28acf9cc7c848f672aa46b34831d7c48b6d22bb2e681ef"} Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.762913 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fdllh"] Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.763946 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fdllh" Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.767191 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.767580 4867 generic.go:334] "Generic (PLEG): container finished" podID="967c4acd-2c93-49a1-9b42-71e23f0b28d0" containerID="0d0ed7617262d47e474047d2c96fd322c3300a423e91e4a44690bf1270fbf435" exitCode=0 Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.767874 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-69c1-account-create-update-4fm55" event={"ID":"967c4acd-2c93-49a1-9b42-71e23f0b28d0","Type":"ContainerDied","Data":"0d0ed7617262d47e474047d2c96fd322c3300a423e91e4a44690bf1270fbf435"} Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.767928 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-69c1-account-create-update-4fm55" event={"ID":"967c4acd-2c93-49a1-9b42-71e23f0b28d0","Type":"ContainerStarted","Data":"b2ec90cd26d139a70d0df00e4c52df3111f8766c2dde0d05bf57c5ab2298b6ad"} Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.784304 4867 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/root-account-create-update-fdllh"] Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.923101 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6fcj\" (UniqueName: \"kubernetes.io/projected/44ddda60-ee4a-453c-82fb-bb99e16fc076-kube-api-access-s6fcj\") pod \"root-account-create-update-fdllh\" (UID: \"44ddda60-ee4a-453c-82fb-bb99e16fc076\") " pod="openstack/root-account-create-update-fdllh" Jan 01 08:45:58 crc kubenswrapper[4867]: I0101 08:45:58.923165 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44ddda60-ee4a-453c-82fb-bb99e16fc076-operator-scripts\") pod \"root-account-create-update-fdllh\" (UID: \"44ddda60-ee4a-453c-82fb-bb99e16fc076\") " pod="openstack/root-account-create-update-fdllh" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.024043 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6fcj\" (UniqueName: \"kubernetes.io/projected/44ddda60-ee4a-453c-82fb-bb99e16fc076-kube-api-access-s6fcj\") pod \"root-account-create-update-fdllh\" (UID: \"44ddda60-ee4a-453c-82fb-bb99e16fc076\") " pod="openstack/root-account-create-update-fdllh" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.024304 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44ddda60-ee4a-453c-82fb-bb99e16fc076-operator-scripts\") pod \"root-account-create-update-fdllh\" (UID: \"44ddda60-ee4a-453c-82fb-bb99e16fc076\") " pod="openstack/root-account-create-update-fdllh" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.024991 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44ddda60-ee4a-453c-82fb-bb99e16fc076-operator-scripts\") pod 
\"root-account-create-update-fdllh\" (UID: \"44ddda60-ee4a-453c-82fb-bb99e16fc076\") " pod="openstack/root-account-create-update-fdllh" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.043038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6fcj\" (UniqueName: \"kubernetes.io/projected/44ddda60-ee4a-453c-82fb-bb99e16fc076-kube-api-access-s6fcj\") pod \"root-account-create-update-fdllh\" (UID: \"44ddda60-ee4a-453c-82fb-bb99e16fc076\") " pod="openstack/root-account-create-update-fdllh" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.114596 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.122238 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fdllh" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.144169 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b04661-c1d8-41c9-9365-e36e19ac638c" path="/var/lib/kubelet/pods/24b04661-c1d8-41c9-9365-e36e19ac638c/volumes" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.230236 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-dispersionconf\") pod \"6640c65c-7090-4961-ba25-038487f6c62b\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.230291 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-swiftconf\") pod \"6640c65c-7090-4961-ba25-038487f6c62b\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.230330 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-ring-data-devices\") pod \"6640c65c-7090-4961-ba25-038487f6c62b\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.230348 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-combined-ca-bundle\") pod \"6640c65c-7090-4961-ba25-038487f6c62b\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.230386 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6640c65c-7090-4961-ba25-038487f6c62b-etc-swift\") pod \"6640c65c-7090-4961-ba25-038487f6c62b\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.230409 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kdg5\" (UniqueName: \"kubernetes.io/projected/6640c65c-7090-4961-ba25-038487f6c62b-kube-api-access-6kdg5\") pod \"6640c65c-7090-4961-ba25-038487f6c62b\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.230442 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-scripts\") pod \"6640c65c-7090-4961-ba25-038487f6c62b\" (UID: \"6640c65c-7090-4961-ba25-038487f6c62b\") " Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.232429 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6640c65c-7090-4961-ba25-038487f6c62b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6640c65c-7090-4961-ba25-038487f6c62b" (UID: "6640c65c-7090-4961-ba25-038487f6c62b"). 
InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.233267 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6640c65c-7090-4961-ba25-038487f6c62b" (UID: "6640c65c-7090-4961-ba25-038487f6c62b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.235073 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6640c65c-7090-4961-ba25-038487f6c62b-kube-api-access-6kdg5" (OuterVolumeSpecName: "kube-api-access-6kdg5") pod "6640c65c-7090-4961-ba25-038487f6c62b" (UID: "6640c65c-7090-4961-ba25-038487f6c62b"). InnerVolumeSpecName "kube-api-access-6kdg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.236094 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6640c65c-7090-4961-ba25-038487f6c62b" (UID: "6640c65c-7090-4961-ba25-038487f6c62b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.258663 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6640c65c-7090-4961-ba25-038487f6c62b" (UID: "6640c65c-7090-4961-ba25-038487f6c62b"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.266498 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6640c65c-7090-4961-ba25-038487f6c62b" (UID: "6640c65c-7090-4961-ba25-038487f6c62b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.268405 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-scripts" (OuterVolumeSpecName: "scripts") pod "6640c65c-7090-4961-ba25-038487f6c62b" (UID: "6640c65c-7090-4961-ba25-038487f6c62b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.335000 4867 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.335039 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.335051 4867 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6640c65c-7090-4961-ba25-038487f6c62b-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.335065 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kdg5\" (UniqueName: \"kubernetes.io/projected/6640c65c-7090-4961-ba25-038487f6c62b-kube-api-access-6kdg5\") on node \"crc\" 
DevicePath \"\"" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.335080 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6640c65c-7090-4961-ba25-038487f6c62b-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.335091 4867 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.335102 4867 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6640c65c-7090-4961-ba25-038487f6c62b-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.589964 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fdllh"] Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.775936 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sn8tf" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.775961 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sn8tf" event={"ID":"6640c65c-7090-4961-ba25-038487f6c62b","Type":"ContainerDied","Data":"1ce90bf840fa08b6ed96cf7b4c93c2fee865af26819dbf0adf9adb266dcbbdf3"} Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.776013 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ce90bf840fa08b6ed96cf7b4c93c2fee865af26819dbf0adf9adb266dcbbdf3" Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.778823 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fdllh" event={"ID":"44ddda60-ee4a-453c-82fb-bb99e16fc076","Type":"ContainerStarted","Data":"0b77667b1946dbb880951c13289dda1a815b3f0ebcce03d5ef30d4b809c5bd4b"} Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.778953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fdllh" event={"ID":"44ddda60-ee4a-453c-82fb-bb99e16fc076","Type":"ContainerStarted","Data":"393dce7db9322b509fb302ba474e3ea9935c5b1bfdb345463f95b243d0bc66af"} Jan 01 08:45:59 crc kubenswrapper[4867]: I0101 08:45:59.800535 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-fdllh" podStartSLOduration=1.8005161520000001 podStartE2EDuration="1.800516152s" podCreationTimestamp="2026-01-01 08:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:45:59.795687767 +0000 UTC m=+1168.930956546" watchObservedRunningTime="2026-01-01 08:45:59.800516152 +0000 UTC m=+1168.935784931" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.214834 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-69c1-account-create-update-4fm55" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.221088 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7bpzf" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.349235 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/967c4acd-2c93-49a1-9b42-71e23f0b28d0-operator-scripts\") pod \"967c4acd-2c93-49a1-9b42-71e23f0b28d0\" (UID: \"967c4acd-2c93-49a1-9b42-71e23f0b28d0\") " Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.349316 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7fs8\" (UniqueName: \"kubernetes.io/projected/967c4acd-2c93-49a1-9b42-71e23f0b28d0-kube-api-access-g7fs8\") pod \"967c4acd-2c93-49a1-9b42-71e23f0b28d0\" (UID: \"967c4acd-2c93-49a1-9b42-71e23f0b28d0\") " Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.349459 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7m8b\" (UniqueName: \"kubernetes.io/projected/9582f632-f340-4a0f-b436-c959a32e797e-kube-api-access-f7m8b\") pod \"9582f632-f340-4a0f-b436-c959a32e797e\" (UID: \"9582f632-f340-4a0f-b436-c959a32e797e\") " Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.349946 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967c4acd-2c93-49a1-9b42-71e23f0b28d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "967c4acd-2c93-49a1-9b42-71e23f0b28d0" (UID: "967c4acd-2c93-49a1-9b42-71e23f0b28d0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.350292 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9582f632-f340-4a0f-b436-c959a32e797e-operator-scripts\") pod \"9582f632-f340-4a0f-b436-c959a32e797e\" (UID: \"9582f632-f340-4a0f-b436-c959a32e797e\") " Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.350704 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9582f632-f340-4a0f-b436-c959a32e797e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9582f632-f340-4a0f-b436-c959a32e797e" (UID: "9582f632-f340-4a0f-b436-c959a32e797e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.350735 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/967c4acd-2c93-49a1-9b42-71e23f0b28d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.354379 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/967c4acd-2c93-49a1-9b42-71e23f0b28d0-kube-api-access-g7fs8" (OuterVolumeSpecName: "kube-api-access-g7fs8") pod "967c4acd-2c93-49a1-9b42-71e23f0b28d0" (UID: "967c4acd-2c93-49a1-9b42-71e23f0b28d0"). InnerVolumeSpecName "kube-api-access-g7fs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.354555 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9582f632-f340-4a0f-b436-c959a32e797e-kube-api-access-f7m8b" (OuterVolumeSpecName: "kube-api-access-f7m8b") pod "9582f632-f340-4a0f-b436-c959a32e797e" (UID: "9582f632-f340-4a0f-b436-c959a32e797e"). 
InnerVolumeSpecName "kube-api-access-f7m8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.453972 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7fs8\" (UniqueName: \"kubernetes.io/projected/967c4acd-2c93-49a1-9b42-71e23f0b28d0-kube-api-access-g7fs8\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.454009 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7m8b\" (UniqueName: \"kubernetes.io/projected/9582f632-f340-4a0f-b436-c959a32e797e-kube-api-access-f7m8b\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.454023 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9582f632-f340-4a0f-b436-c959a32e797e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.554840 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.558799 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift\") pod \"swift-storage-0\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " pod="openstack/swift-storage-0" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.764751 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.799583 4867 generic.go:334] "Generic (PLEG): container finished" podID="44ddda60-ee4a-453c-82fb-bb99e16fc076" containerID="0b77667b1946dbb880951c13289dda1a815b3f0ebcce03d5ef30d4b809c5bd4b" exitCode=0 Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.799960 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fdllh" event={"ID":"44ddda60-ee4a-453c-82fb-bb99e16fc076","Type":"ContainerDied","Data":"0b77667b1946dbb880951c13289dda1a815b3f0ebcce03d5ef30d4b809c5bd4b"} Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.804350 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7bpzf" event={"ID":"9582f632-f340-4a0f-b436-c959a32e797e","Type":"ContainerDied","Data":"d0b4d493f35137673a28acf9cc7c848f672aa46b34831d7c48b6d22bb2e681ef"} Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.804385 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b4d493f35137673a28acf9cc7c848f672aa46b34831d7c48b6d22bb2e681ef" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.804812 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7bpzf" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.812347 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-69c1-account-create-update-4fm55" event={"ID":"967c4acd-2c93-49a1-9b42-71e23f0b28d0","Type":"ContainerDied","Data":"b2ec90cd26d139a70d0df00e4c52df3111f8766c2dde0d05bf57c5ab2298b6ad"} Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.812398 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2ec90cd26d139a70d0df00e4c52df3111f8766c2dde0d05bf57c5ab2298b6ad" Jan 01 08:46:00 crc kubenswrapper[4867]: I0101 08:46:00.812493 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-69c1-account-create-update-4fm55" Jan 01 08:46:01 crc kubenswrapper[4867]: I0101 08:46:01.341066 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 01 08:46:01 crc kubenswrapper[4867]: W0101 08:46:01.350949 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1f687f2_3229_401c_b5cb_f79e96311c45.slice/crio-eeda38690f053421ceda9618b7f4c7b6a16f4b1db48821d6e4e0c79fbbabae99 WatchSource:0}: Error finding container eeda38690f053421ceda9618b7f4c7b6a16f4b1db48821d6e4e0c79fbbabae99: Status 404 returned error can't find the container with id eeda38690f053421ceda9618b7f4c7b6a16f4b1db48821d6e4e0c79fbbabae99 Jan 01 08:46:01 crc kubenswrapper[4867]: I0101 08:46:01.827522 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"eeda38690f053421ceda9618b7f4c7b6a16f4b1db48821d6e4e0c79fbbabae99"} Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.159563 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.175759 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8jl6r" podUID="02bf5c7d-1674-4308-8bcf-751d6c4a3783" containerName="ovn-controller" probeResult="failure" output=< Jan 01 08:46:02 crc kubenswrapper[4867]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 01 08:46:02 crc kubenswrapper[4867]: > Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.216154 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.220322 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fdllh" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.281581 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44ddda60-ee4a-453c-82fb-bb99e16fc076-operator-scripts\") pod \"44ddda60-ee4a-453c-82fb-bb99e16fc076\" (UID: \"44ddda60-ee4a-453c-82fb-bb99e16fc076\") " Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.281667 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6fcj\" (UniqueName: \"kubernetes.io/projected/44ddda60-ee4a-453c-82fb-bb99e16fc076-kube-api-access-s6fcj\") pod \"44ddda60-ee4a-453c-82fb-bb99e16fc076\" (UID: \"44ddda60-ee4a-453c-82fb-bb99e16fc076\") " Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.282673 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ddda60-ee4a-453c-82fb-bb99e16fc076-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44ddda60-ee4a-453c-82fb-bb99e16fc076" (UID: "44ddda60-ee4a-453c-82fb-bb99e16fc076"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.296375 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ddda60-ee4a-453c-82fb-bb99e16fc076-kube-api-access-s6fcj" (OuterVolumeSpecName: "kube-api-access-s6fcj") pod "44ddda60-ee4a-453c-82fb-bb99e16fc076" (UID: "44ddda60-ee4a-453c-82fb-bb99e16fc076"). InnerVolumeSpecName "kube-api-access-s6fcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.384761 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44ddda60-ee4a-453c-82fb-bb99e16fc076-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.384797 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6fcj\" (UniqueName: \"kubernetes.io/projected/44ddda60-ee4a-453c-82fb-bb99e16fc076-kube-api-access-s6fcj\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.454682 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-hhzrt"] Jan 01 08:46:02 crc kubenswrapper[4867]: E0101 08:46:02.455019 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9582f632-f340-4a0f-b436-c959a32e797e" containerName="mariadb-database-create" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.455035 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9582f632-f340-4a0f-b436-c959a32e797e" containerName="mariadb-database-create" Jan 01 08:46:02 crc kubenswrapper[4867]: E0101 08:46:02.455071 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ddda60-ee4a-453c-82fb-bb99e16fc076" containerName="mariadb-account-create-update" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.455082 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ddda60-ee4a-453c-82fb-bb99e16fc076" containerName="mariadb-account-create-update" Jan 01 08:46:02 crc kubenswrapper[4867]: E0101 08:46:02.455096 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6640c65c-7090-4961-ba25-038487f6c62b" containerName="swift-ring-rebalance" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.455104 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6640c65c-7090-4961-ba25-038487f6c62b" 
containerName="swift-ring-rebalance" Jan 01 08:46:02 crc kubenswrapper[4867]: E0101 08:46:02.455116 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="967c4acd-2c93-49a1-9b42-71e23f0b28d0" containerName="mariadb-account-create-update" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.455121 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="967c4acd-2c93-49a1-9b42-71e23f0b28d0" containerName="mariadb-account-create-update" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.455258 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6640c65c-7090-4961-ba25-038487f6c62b" containerName="swift-ring-rebalance" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.455279 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ddda60-ee4a-453c-82fb-bb99e16fc076" containerName="mariadb-account-create-update" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.455292 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="967c4acd-2c93-49a1-9b42-71e23f0b28d0" containerName="mariadb-account-create-update" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.455300 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9582f632-f340-4a0f-b436-c959a32e797e" containerName="mariadb-database-create" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.455812 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.464898 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-sg4nb" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.468151 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.485816 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-config-data\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.485898 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-combined-ca-bundle\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.485931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7622\" (UniqueName: \"kubernetes.io/projected/f85694b8-9e77-48d3-9338-4b65bfe5d21f-kube-api-access-v7622\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.485989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-db-sync-config-data\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 
crc kubenswrapper[4867]: I0101 08:46:02.535103 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hhzrt"] Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.589860 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7622\" (UniqueName: \"kubernetes.io/projected/f85694b8-9e77-48d3-9338-4b65bfe5d21f-kube-api-access-v7622\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.589947 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-db-sync-config-data\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.590028 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-config-data\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.590057 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-combined-ca-bundle\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.591296 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8jl6r-config-cqkns"] Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.592400 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.600423 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-db-sync-config-data\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.618741 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-config-data\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.621554 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-combined-ca-bundle\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.632874 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7622\" (UniqueName: \"kubernetes.io/projected/f85694b8-9e77-48d3-9338-4b65bfe5d21f-kube-api-access-v7622\") pod \"glance-db-sync-hhzrt\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.632996 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.645437 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jl6r-config-cqkns"] Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.690998 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zdc\" (UniqueName: \"kubernetes.io/projected/90146b6b-3eb2-41f2-92f1-7410152155a0-kube-api-access-22zdc\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.691082 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-log-ovn\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.691253 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run-ovn\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.691415 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-additional-scripts\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.691485 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-scripts\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 
08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.691649 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.769710 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.793380 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.793441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zdc\" (UniqueName: \"kubernetes.io/projected/90146b6b-3eb2-41f2-92f1-7410152155a0-kube-api-access-22zdc\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.793482 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-log-ovn\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.793536 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run-ovn\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.793582 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-additional-scripts\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.793620 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-scripts\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.793746 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.793746 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-log-ovn\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.793844 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run-ovn\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.795296 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-additional-scripts\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.795684 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-scripts\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.812978 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zdc\" (UniqueName: \"kubernetes.io/projected/90146b6b-3eb2-41f2-92f1-7410152155a0-kube-api-access-22zdc\") pod \"ovn-controller-8jl6r-config-cqkns\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.860501 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fdllh" event={"ID":"44ddda60-ee4a-453c-82fb-bb99e16fc076","Type":"ContainerDied","Data":"393dce7db9322b509fb302ba474e3ea9935c5b1bfdb345463f95b243d0bc66af"} Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.860999 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="393dce7db9322b509fb302ba474e3ea9935c5b1bfdb345463f95b243d0bc66af" Jan 01 08:46:02 crc kubenswrapper[4867]: I0101 08:46:02.860517 
4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fdllh" Jan 01 08:46:03 crc kubenswrapper[4867]: I0101 08:46:03.008419 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:03 crc kubenswrapper[4867]: I0101 08:46:03.322243 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hhzrt"] Jan 01 08:46:03 crc kubenswrapper[4867]: W0101 08:46:03.331349 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85694b8_9e77_48d3_9338_4b65bfe5d21f.slice/crio-a23d31c71e249b01abbe2104e708b2d486872273d7a241b7b0e45b3b8ce26f7d WatchSource:0}: Error finding container a23d31c71e249b01abbe2104e708b2d486872273d7a241b7b0e45b3b8ce26f7d: Status 404 returned error can't find the container with id a23d31c71e249b01abbe2104e708b2d486872273d7a241b7b0e45b3b8ce26f7d Jan 01 08:46:03 crc kubenswrapper[4867]: I0101 08:46:03.425203 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jl6r-config-cqkns"] Jan 01 08:46:03 crc kubenswrapper[4867]: I0101 08:46:03.876849 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08"} Jan 01 08:46:03 crc kubenswrapper[4867]: I0101 08:46:03.878855 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hhzrt" event={"ID":"f85694b8-9e77-48d3-9338-4b65bfe5d21f","Type":"ContainerStarted","Data":"a23d31c71e249b01abbe2104e708b2d486872273d7a241b7b0e45b3b8ce26f7d"} Jan 01 08:46:03 crc kubenswrapper[4867]: I0101 08:46:03.880992 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jl6r-config-cqkns" 
event={"ID":"90146b6b-3eb2-41f2-92f1-7410152155a0","Type":"ContainerStarted","Data":"e76f85e1ac6a5b72f9751971e1db16afce532dc787a80afdc738f41e023e7b04"} Jan 01 08:46:03 crc kubenswrapper[4867]: I0101 08:46:03.881013 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jl6r-config-cqkns" event={"ID":"90146b6b-3eb2-41f2-92f1-7410152155a0","Type":"ContainerStarted","Data":"cf2fcffca6301dab616b697afc0b8fff6530c1569fb3c9ef0170c41e44514ce8"} Jan 01 08:46:03 crc kubenswrapper[4867]: I0101 08:46:03.899984 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8jl6r-config-cqkns" podStartSLOduration=1.8999667439999999 podStartE2EDuration="1.899966744s" podCreationTimestamp="2026-01-01 08:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:46:03.897507405 +0000 UTC m=+1173.032776174" watchObservedRunningTime="2026-01-01 08:46:03.899966744 +0000 UTC m=+1173.035235513" Jan 01 08:46:04 crc kubenswrapper[4867]: I0101 08:46:04.893433 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f"} Jan 01 08:46:04 crc kubenswrapper[4867]: I0101 08:46:04.893680 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2"} Jan 01 08:46:04 crc kubenswrapper[4867]: I0101 08:46:04.893691 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142"} Jan 01 08:46:04 crc 
kubenswrapper[4867]: I0101 08:46:04.895354 4867 generic.go:334] "Generic (PLEG): container finished" podID="90146b6b-3eb2-41f2-92f1-7410152155a0" containerID="e76f85e1ac6a5b72f9751971e1db16afce532dc787a80afdc738f41e023e7b04" exitCode=0 Jan 01 08:46:04 crc kubenswrapper[4867]: I0101 08:46:04.895378 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jl6r-config-cqkns" event={"ID":"90146b6b-3eb2-41f2-92f1-7410152155a0","Type":"ContainerDied","Data":"e76f85e1ac6a5b72f9751971e1db16afce532dc787a80afdc738f41e023e7b04"} Jan 01 08:46:05 crc kubenswrapper[4867]: I0101 08:46:05.908134 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a"} Jan 01 08:46:05 crc kubenswrapper[4867]: I0101 08:46:05.908651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd"} Jan 01 08:46:05 crc kubenswrapper[4867]: I0101 08:46:05.908664 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8"} Jan 01 08:46:05 crc kubenswrapper[4867]: I0101 08:46:05.908676 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787"} Jan 01 08:46:05 crc kubenswrapper[4867]: I0101 08:46:05.912012 4867 generic.go:334] "Generic (PLEG): container finished" podID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" 
containerID="dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6" exitCode=0 Jan 01 08:46:05 crc kubenswrapper[4867]: I0101 08:46:05.912099 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99","Type":"ContainerDied","Data":"dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6"} Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.267487 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.361208 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-log-ovn\") pod \"90146b6b-3eb2-41f2-92f1-7410152155a0\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.361274 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "90146b6b-3eb2-41f2-92f1-7410152155a0" (UID: "90146b6b-3eb2-41f2-92f1-7410152155a0"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.361313 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-scripts\") pod \"90146b6b-3eb2-41f2-92f1-7410152155a0\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.362199 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "90146b6b-3eb2-41f2-92f1-7410152155a0" (UID: "90146b6b-3eb2-41f2-92f1-7410152155a0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.363143 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-scripts" (OuterVolumeSpecName: "scripts") pod "90146b6b-3eb2-41f2-92f1-7410152155a0" (UID: "90146b6b-3eb2-41f2-92f1-7410152155a0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.363253 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-additional-scripts\") pod \"90146b6b-3eb2-41f2-92f1-7410152155a0\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.363322 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22zdc\" (UniqueName: \"kubernetes.io/projected/90146b6b-3eb2-41f2-92f1-7410152155a0-kube-api-access-22zdc\") pod \"90146b6b-3eb2-41f2-92f1-7410152155a0\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.363580 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "90146b6b-3eb2-41f2-92f1-7410152155a0" (UID: "90146b6b-3eb2-41f2-92f1-7410152155a0"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.363353 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run-ovn\") pod \"90146b6b-3eb2-41f2-92f1-7410152155a0\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.364058 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run\") pod \"90146b6b-3eb2-41f2-92f1-7410152155a0\" (UID: \"90146b6b-3eb2-41f2-92f1-7410152155a0\") " Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.364229 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run" (OuterVolumeSpecName: "var-run") pod "90146b6b-3eb2-41f2-92f1-7410152155a0" (UID: "90146b6b-3eb2-41f2-92f1-7410152155a0"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.365715 4867 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.365790 4867 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.365801 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.365812 4867 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/90146b6b-3eb2-41f2-92f1-7410152155a0-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.366107 4867 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/90146b6b-3eb2-41f2-92f1-7410152155a0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.369636 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90146b6b-3eb2-41f2-92f1-7410152155a0-kube-api-access-22zdc" (OuterVolumeSpecName: "kube-api-access-22zdc") pod "90146b6b-3eb2-41f2-92f1-7410152155a0" (UID: "90146b6b-3eb2-41f2-92f1-7410152155a0"). InnerVolumeSpecName "kube-api-access-22zdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.468175 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22zdc\" (UniqueName: \"kubernetes.io/projected/90146b6b-3eb2-41f2-92f1-7410152155a0-kube-api-access-22zdc\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.922484 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jl6r-config-cqkns" event={"ID":"90146b6b-3eb2-41f2-92f1-7410152155a0","Type":"ContainerDied","Data":"cf2fcffca6301dab616b697afc0b8fff6530c1569fb3c9ef0170c41e44514ce8"} Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.922525 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf2fcffca6301dab616b697afc0b8fff6530c1569fb3c9ef0170c41e44514ce8" Jan 01 08:46:06 crc kubenswrapper[4867]: I0101 08:46:06.922578 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jl6r-config-cqkns" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.006459 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8jl6r-config-cqkns"] Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.024616 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8jl6r-config-cqkns"] Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.117502 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8jl6r-config-kr77j"] Jan 01 08:46:07 crc kubenswrapper[4867]: E0101 08:46:07.117854 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90146b6b-3eb2-41f2-92f1-7410152155a0" containerName="ovn-config" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.117869 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="90146b6b-3eb2-41f2-92f1-7410152155a0" containerName="ovn-config" Jan 01 08:46:07 crc kubenswrapper[4867]: 
I0101 08:46:07.118076 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="90146b6b-3eb2-41f2-92f1-7410152155a0" containerName="ovn-config" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.118563 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.123170 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.127314 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jl6r-config-kr77j"] Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.155126 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90146b6b-3eb2-41f2-92f1-7410152155a0" path="/var/lib/kubelet/pods/90146b6b-3eb2-41f2-92f1-7410152155a0/volumes" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.179361 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-8jl6r" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.180819 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-scripts\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.180861 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run-ovn\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.180914 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-additional-scripts\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.180938 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55mx\" (UniqueName: \"kubernetes.io/projected/fbe6de6f-0262-49b8-b480-2c407ad1d517-kube-api-access-z55mx\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.180968 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.181008 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-log-ovn\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.286367 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-additional-scripts\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " 
pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.286437 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z55mx\" (UniqueName: \"kubernetes.io/projected/fbe6de6f-0262-49b8-b480-2c407ad1d517-kube-api-access-z55mx\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.286531 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.286580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-log-ovn\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.286702 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-scripts\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.286738 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run-ovn\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " 
pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.287078 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-log-ovn\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.287263 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run-ovn\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.287664 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-additional-scripts\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.287802 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.290172 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-scripts\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc 
kubenswrapper[4867]: I0101 08:46:07.304479 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z55mx\" (UniqueName: \"kubernetes.io/projected/fbe6de6f-0262-49b8-b480-2c407ad1d517-kube-api-access-z55mx\") pod \"ovn-controller-8jl6r-config-kr77j\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.441587 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.932492 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99","Type":"ContainerStarted","Data":"bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55"} Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.932933 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:46:07 crc kubenswrapper[4867]: I0101 08:46:07.964892 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.350643557 podStartE2EDuration="1m11.964858867s" podCreationTimestamp="2026-01-01 08:44:56 +0000 UTC" firstStartedPulling="2026-01-01 08:44:58.909123985 +0000 UTC m=+1108.044392754" lastFinishedPulling="2026-01-01 08:45:32.523339295 +0000 UTC m=+1141.658608064" observedRunningTime="2026-01-01 08:46:07.953443156 +0000 UTC m=+1177.088711925" watchObservedRunningTime="2026-01-01 08:46:07.964858867 +0000 UTC m=+1177.100127636" Jan 01 08:46:08 crc kubenswrapper[4867]: I0101 08:46:08.071324 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jl6r-config-kr77j"] Jan 01 08:46:08 crc kubenswrapper[4867]: I0101 08:46:08.946609 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9"} Jan 01 08:46:08 crc kubenswrapper[4867]: I0101 08:46:08.947052 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493"} Jan 01 08:46:08 crc kubenswrapper[4867]: I0101 08:46:08.947062 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985"} Jan 01 08:46:08 crc kubenswrapper[4867]: I0101 08:46:08.947072 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615"} Jan 01 08:46:08 crc kubenswrapper[4867]: I0101 08:46:08.950262 4867 generic.go:334] "Generic (PLEG): container finished" podID="fbe6de6f-0262-49b8-b480-2c407ad1d517" containerID="f133afbf0ec3f1bc8e7cda6da7ec25adcc5fedc08b643a231a16e6ba3a90ede2" exitCode=0 Jan 01 08:46:08 crc kubenswrapper[4867]: I0101 08:46:08.950345 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jl6r-config-kr77j" event={"ID":"fbe6de6f-0262-49b8-b480-2c407ad1d517","Type":"ContainerDied","Data":"f133afbf0ec3f1bc8e7cda6da7ec25adcc5fedc08b643a231a16e6ba3a90ede2"} Jan 01 08:46:08 crc kubenswrapper[4867]: I0101 08:46:08.950373 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jl6r-config-kr77j" event={"ID":"fbe6de6f-0262-49b8-b480-2c407ad1d517","Type":"ContainerStarted","Data":"a64524e3e6c38736fe94bd9c6c996bed338a5f9bcddce3a041a527026dd25cae"} Jan 01 08:46:08 crc kubenswrapper[4867]: I0101 
08:46:08.952052 4867 generic.go:334] "Generic (PLEG): container finished" podID="84d7aac6-1073-41c0-acff-169e36ec197d" containerID="ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386" exitCode=0 Jan 01 08:46:08 crc kubenswrapper[4867]: I0101 08:46:08.952141 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"84d7aac6-1073-41c0-acff-169e36ec197d","Type":"ContainerDied","Data":"ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386"} Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.017921 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jl6r-config-kr77j" event={"ID":"fbe6de6f-0262-49b8-b480-2c407ad1d517","Type":"ContainerDied","Data":"a64524e3e6c38736fe94bd9c6c996bed338a5f9bcddce3a041a527026dd25cae"} Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.018495 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a64524e3e6c38736fe94bd9c6c996bed338a5f9bcddce3a041a527026dd25cae" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.126227 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.153575 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run-ovn\") pod \"fbe6de6f-0262-49b8-b480-2c407ad1d517\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.153649 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z55mx\" (UniqueName: \"kubernetes.io/projected/fbe6de6f-0262-49b8-b480-2c407ad1d517-kube-api-access-z55mx\") pod \"fbe6de6f-0262-49b8-b480-2c407ad1d517\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.153672 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-log-ovn\") pod \"fbe6de6f-0262-49b8-b480-2c407ad1d517\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.153734 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "fbe6de6f-0262-49b8-b480-2c407ad1d517" (UID: "fbe6de6f-0262-49b8-b480-2c407ad1d517"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.153805 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run\") pod \"fbe6de6f-0262-49b8-b480-2c407ad1d517\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.153854 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-scripts\") pod \"fbe6de6f-0262-49b8-b480-2c407ad1d517\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.153910 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-additional-scripts\") pod \"fbe6de6f-0262-49b8-b480-2c407ad1d517\" (UID: \"fbe6de6f-0262-49b8-b480-2c407ad1d517\") " Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.154482 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run" (OuterVolumeSpecName: "var-run") pod "fbe6de6f-0262-49b8-b480-2c407ad1d517" (UID: "fbe6de6f-0262-49b8-b480-2c407ad1d517"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.154536 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "fbe6de6f-0262-49b8-b480-2c407ad1d517" (UID: "fbe6de6f-0262-49b8-b480-2c407ad1d517"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.155289 4867 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.155315 4867 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.155330 4867 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe6de6f-0262-49b8-b480-2c407ad1d517-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.155425 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-scripts" (OuterVolumeSpecName: "scripts") pod "fbe6de6f-0262-49b8-b480-2c407ad1d517" (UID: "fbe6de6f-0262-49b8-b480-2c407ad1d517"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.156724 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "fbe6de6f-0262-49b8-b480-2c407ad1d517" (UID: "fbe6de6f-0262-49b8-b480-2c407ad1d517"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.158820 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe6de6f-0262-49b8-b480-2c407ad1d517-kube-api-access-z55mx" (OuterVolumeSpecName: "kube-api-access-z55mx") pod "fbe6de6f-0262-49b8-b480-2c407ad1d517" (UID: "fbe6de6f-0262-49b8-b480-2c407ad1d517"). InnerVolumeSpecName "kube-api-access-z55mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.257228 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z55mx\" (UniqueName: \"kubernetes.io/projected/fbe6de6f-0262-49b8-b480-2c407ad1d517-kube-api-access-z55mx\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.257258 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:16 crc kubenswrapper[4867]: I0101 08:46:16.257272 4867 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe6de6f-0262-49b8-b480-2c407ad1d517-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.027279 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"84d7aac6-1073-41c0-acff-169e36ec197d","Type":"ContainerStarted","Data":"8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b"} Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.027470 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.035524 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e"} Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.035588 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70"} Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.035612 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerStarted","Data":"b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066"} Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.037942 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jl6r-config-kr77j" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.037942 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hhzrt" event={"ID":"f85694b8-9e77-48d3-9338-4b65bfe5d21f","Type":"ContainerStarted","Data":"6dfaece66e1fdc989b0ac443c52ee434f3a3f5bd5a2577a0c1f5bba039a354c7"} Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.061253 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371956.79355 podStartE2EDuration="1m20.061226204s" podCreationTimestamp="2026-01-01 08:44:57 +0000 UTC" firstStartedPulling="2026-01-01 08:44:59.094477982 +0000 UTC m=+1108.229746751" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:46:17.052338554 +0000 UTC m=+1186.187607333" watchObservedRunningTime="2026-01-01 08:46:17.061226204 +0000 UTC m=+1186.196494983" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.070295 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-db-sync-hhzrt" podStartSLOduration=2.40636064 podStartE2EDuration="15.070275878s" podCreationTimestamp="2026-01-01 08:46:02 +0000 UTC" firstStartedPulling="2026-01-01 08:46:03.333367607 +0000 UTC m=+1172.468636376" lastFinishedPulling="2026-01-01 08:46:15.997282835 +0000 UTC m=+1185.132551614" observedRunningTime="2026-01-01 08:46:17.067951763 +0000 UTC m=+1186.203220562" watchObservedRunningTime="2026-01-01 08:46:17.070275878 +0000 UTC m=+1186.205544647" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.131736 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=27.822409001 podStartE2EDuration="34.131706824s" podCreationTimestamp="2026-01-01 08:45:43 +0000 UTC" firstStartedPulling="2026-01-01 08:46:01.353134078 +0000 UTC m=+1170.488402847" lastFinishedPulling="2026-01-01 08:46:07.662431901 +0000 UTC m=+1176.797700670" observedRunningTime="2026-01-01 08:46:17.115829558 +0000 UTC m=+1186.251098367" watchObservedRunningTime="2026-01-01 08:46:17.131706824 +0000 UTC m=+1186.266975613" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.214919 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8jl6r-config-kr77j"] Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.223415 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8jl6r-config-kr77j"] Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.397715 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64766d4dcc-5fqnd"] Jan 01 08:46:17 crc kubenswrapper[4867]: E0101 08:46:17.398042 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe6de6f-0262-49b8-b480-2c407ad1d517" containerName="ovn-config" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.398058 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe6de6f-0262-49b8-b480-2c407ad1d517" containerName="ovn-config" Jan 01 08:46:17 
crc kubenswrapper[4867]: I0101 08:46:17.398214 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe6de6f-0262-49b8-b480-2c407ad1d517" containerName="ovn-config" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.398997 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.401228 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.424312 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64766d4dcc-5fqnd"] Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.579139 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-svc\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.579212 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5x2k\" (UniqueName: \"kubernetes.io/projected/79ebb723-b63c-4ca5-800e-5a2b30633213-kube-api-access-n5x2k\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.579381 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-sb\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.579551 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-config\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.579634 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-nb\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.579742 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-swift-storage-0\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.681475 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-sb\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.681527 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-config\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.681549 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-nb\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.681580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-swift-storage-0\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.681625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-svc\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.681645 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5x2k\" (UniqueName: \"kubernetes.io/projected/79ebb723-b63c-4ca5-800e-5a2b30633213-kube-api-access-n5x2k\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.682703 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-nb\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.682741 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-sb\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.683162 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-swift-storage-0\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.683565 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-config\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.683766 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-svc\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.701072 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5x2k\" (UniqueName: \"kubernetes.io/projected/79ebb723-b63c-4ca5-800e-5a2b30633213-kube-api-access-n5x2k\") pod \"dnsmasq-dns-64766d4dcc-5fqnd\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:17 crc kubenswrapper[4867]: I0101 08:46:17.713374 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:18 crc kubenswrapper[4867]: I0101 08:46:18.492703 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:46:18 crc kubenswrapper[4867]: I0101 08:46:18.863078 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64766d4dcc-5fqnd"] Jan 01 08:46:19 crc kubenswrapper[4867]: I0101 08:46:19.054053 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" event={"ID":"79ebb723-b63c-4ca5-800e-5a2b30633213","Type":"ContainerStarted","Data":"48afedfc38a9399e1cdbd0848dbd8e32362b9a148d78b1ab7ea70f0dc21a7e23"} Jan 01 08:46:19 crc kubenswrapper[4867]: I0101 08:46:19.138446 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe6de6f-0262-49b8-b480-2c407ad1d517" path="/var/lib/kubelet/pods/fbe6de6f-0262-49b8-b480-2c407ad1d517/volumes" Jan 01 08:46:20 crc kubenswrapper[4867]: I0101 08:46:20.064759 4867 generic.go:334] "Generic (PLEG): container finished" podID="79ebb723-b63c-4ca5-800e-5a2b30633213" containerID="960aa163fe8a41ee581f2511d96f7323c7607a03eb55cf3d0e75d607d2effb6c" exitCode=0 Jan 01 08:46:20 crc kubenswrapper[4867]: I0101 08:46:20.064808 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" event={"ID":"79ebb723-b63c-4ca5-800e-5a2b30633213","Type":"ContainerDied","Data":"960aa163fe8a41ee581f2511d96f7323c7607a03eb55cf3d0e75d607d2effb6c"} Jan 01 08:46:21 crc kubenswrapper[4867]: I0101 08:46:21.073188 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" event={"ID":"79ebb723-b63c-4ca5-800e-5a2b30633213","Type":"ContainerStarted","Data":"4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9"} Jan 01 08:46:21 crc kubenswrapper[4867]: I0101 08:46:21.073687 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:21 crc kubenswrapper[4867]: I0101 08:46:21.097410 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" podStartSLOduration=4.097390219 podStartE2EDuration="4.097390219s" podCreationTimestamp="2026-01-01 08:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:46:21.092274865 +0000 UTC m=+1190.227543644" watchObservedRunningTime="2026-01-01 08:46:21.097390219 +0000 UTC m=+1190.232658988" Jan 01 08:46:23 crc kubenswrapper[4867]: I0101 08:46:23.093981 4867 generic.go:334] "Generic (PLEG): container finished" podID="f85694b8-9e77-48d3-9338-4b65bfe5d21f" containerID="6dfaece66e1fdc989b0ac443c52ee434f3a3f5bd5a2577a0c1f5bba039a354c7" exitCode=0 Jan 01 08:46:23 crc kubenswrapper[4867]: I0101 08:46:23.094060 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hhzrt" event={"ID":"f85694b8-9e77-48d3-9338-4b65bfe5d21f","Type":"ContainerDied","Data":"6dfaece66e1fdc989b0ac443c52ee434f3a3f5bd5a2577a0c1f5bba039a354c7"} Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.476874 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.600105 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-combined-ca-bundle\") pod \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.600152 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-config-data\") pod \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.600199 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-db-sync-config-data\") pod \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.600278 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7622\" (UniqueName: \"kubernetes.io/projected/f85694b8-9e77-48d3-9338-4b65bfe5d21f-kube-api-access-v7622\") pod \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\" (UID: \"f85694b8-9e77-48d3-9338-4b65bfe5d21f\") " Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.607298 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f85694b8-9e77-48d3-9338-4b65bfe5d21f" (UID: "f85694b8-9e77-48d3-9338-4b65bfe5d21f"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.608244 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85694b8-9e77-48d3-9338-4b65bfe5d21f-kube-api-access-v7622" (OuterVolumeSpecName: "kube-api-access-v7622") pod "f85694b8-9e77-48d3-9338-4b65bfe5d21f" (UID: "f85694b8-9e77-48d3-9338-4b65bfe5d21f"). InnerVolumeSpecName "kube-api-access-v7622". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.631188 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f85694b8-9e77-48d3-9338-4b65bfe5d21f" (UID: "f85694b8-9e77-48d3-9338-4b65bfe5d21f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.662070 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-config-data" (OuterVolumeSpecName: "config-data") pod "f85694b8-9e77-48d3-9338-4b65bfe5d21f" (UID: "f85694b8-9e77-48d3-9338-4b65bfe5d21f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.702749 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7622\" (UniqueName: \"kubernetes.io/projected/f85694b8-9e77-48d3-9338-4b65bfe5d21f-kube-api-access-v7622\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.702800 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.702814 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:24 crc kubenswrapper[4867]: I0101 08:46:24.702825 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f85694b8-9e77-48d3-9338-4b65bfe5d21f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.123933 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hhzrt" event={"ID":"f85694b8-9e77-48d3-9338-4b65bfe5d21f","Type":"ContainerDied","Data":"a23d31c71e249b01abbe2104e708b2d486872273d7a241b7b0e45b3b8ce26f7d"} Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.123989 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23d31c71e249b01abbe2104e708b2d486872273d7a241b7b0e45b3b8ce26f7d" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.124091 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hhzrt" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.504771 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64766d4dcc-5fqnd"] Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.505088 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" podUID="79ebb723-b63c-4ca5-800e-5a2b30633213" containerName="dnsmasq-dns" containerID="cri-o://4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9" gracePeriod=10 Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.506057 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.535134 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-786cc75955-lpfwv"] Jan 01 08:46:25 crc kubenswrapper[4867]: E0101 08:46:25.535562 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85694b8-9e77-48d3-9338-4b65bfe5d21f" containerName="glance-db-sync" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.535581 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85694b8-9e77-48d3-9338-4b65bfe5d21f" containerName="glance-db-sync" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.535785 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85694b8-9e77-48d3-9338-4b65bfe5d21f" containerName="glance-db-sync" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.536811 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.557750 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-786cc75955-lpfwv"] Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.624737 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-swift-storage-0\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.624775 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-config\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.624844 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-sb\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.624871 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-nb\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.624911 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-svc\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.624936 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2gh7\" (UniqueName: \"kubernetes.io/projected/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-kube-api-access-f2gh7\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.726487 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-swift-storage-0\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.726765 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-config\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.726861 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-sb\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.726936 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-nb\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.726975 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-svc\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.727022 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2gh7\" (UniqueName: \"kubernetes.io/projected/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-kube-api-access-f2gh7\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.727450 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-swift-storage-0\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.727769 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-config\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.727806 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-svc\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.727850 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-sb\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.728003 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-nb\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.747594 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2gh7\" (UniqueName: \"kubernetes.io/projected/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-kube-api-access-f2gh7\") pod \"dnsmasq-dns-786cc75955-lpfwv\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.862753 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:25 crc kubenswrapper[4867]: I0101 08:46:25.975301 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.031405 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-sb\") pod \"79ebb723-b63c-4ca5-800e-5a2b30633213\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.031492 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-svc\") pod \"79ebb723-b63c-4ca5-800e-5a2b30633213\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.031577 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-swift-storage-0\") pod \"79ebb723-b63c-4ca5-800e-5a2b30633213\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.031605 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-config\") pod \"79ebb723-b63c-4ca5-800e-5a2b30633213\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.031629 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-nb\") pod \"79ebb723-b63c-4ca5-800e-5a2b30633213\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.031709 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5x2k\" 
(UniqueName: \"kubernetes.io/projected/79ebb723-b63c-4ca5-800e-5a2b30633213-kube-api-access-n5x2k\") pod \"79ebb723-b63c-4ca5-800e-5a2b30633213\" (UID: \"79ebb723-b63c-4ca5-800e-5a2b30633213\") " Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.046078 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ebb723-b63c-4ca5-800e-5a2b30633213-kube-api-access-n5x2k" (OuterVolumeSpecName: "kube-api-access-n5x2k") pod "79ebb723-b63c-4ca5-800e-5a2b30633213" (UID: "79ebb723-b63c-4ca5-800e-5a2b30633213"). InnerVolumeSpecName "kube-api-access-n5x2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.080838 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-config" (OuterVolumeSpecName: "config") pod "79ebb723-b63c-4ca5-800e-5a2b30633213" (UID: "79ebb723-b63c-4ca5-800e-5a2b30633213"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.081240 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79ebb723-b63c-4ca5-800e-5a2b30633213" (UID: "79ebb723-b63c-4ca5-800e-5a2b30633213"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.082311 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "79ebb723-b63c-4ca5-800e-5a2b30633213" (UID: "79ebb723-b63c-4ca5-800e-5a2b30633213"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.094923 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79ebb723-b63c-4ca5-800e-5a2b30633213" (UID: "79ebb723-b63c-4ca5-800e-5a2b30633213"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.097295 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79ebb723-b63c-4ca5-800e-5a2b30633213" (UID: "79ebb723-b63c-4ca5-800e-5a2b30633213"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.133110 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.133141 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.133157 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.133169 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 
08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.133181 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5x2k\" (UniqueName: \"kubernetes.io/projected/79ebb723-b63c-4ca5-800e-5a2b30633213-kube-api-access-n5x2k\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.133191 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79ebb723-b63c-4ca5-800e-5a2b30633213-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.137941 4867 generic.go:334] "Generic (PLEG): container finished" podID="79ebb723-b63c-4ca5-800e-5a2b30633213" containerID="4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9" exitCode=0 Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.138011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" event={"ID":"79ebb723-b63c-4ca5-800e-5a2b30633213","Type":"ContainerDied","Data":"4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9"} Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.138042 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" event={"ID":"79ebb723-b63c-4ca5-800e-5a2b30633213","Type":"ContainerDied","Data":"48afedfc38a9399e1cdbd0848dbd8e32362b9a148d78b1ab7ea70f0dc21a7e23"} Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.138057 4867 scope.go:117] "RemoveContainer" containerID="4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.138194 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64766d4dcc-5fqnd" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.171330 4867 scope.go:117] "RemoveContainer" containerID="960aa163fe8a41ee581f2511d96f7323c7607a03eb55cf3d0e75d607d2effb6c" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.183475 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64766d4dcc-5fqnd"] Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.191706 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64766d4dcc-5fqnd"] Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.223178 4867 scope.go:117] "RemoveContainer" containerID="4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9" Jan 01 08:46:26 crc kubenswrapper[4867]: E0101 08:46:26.223525 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9\": container with ID starting with 4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9 not found: ID does not exist" containerID="4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.223655 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9"} err="failed to get container status \"4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9\": rpc error: code = NotFound desc = could not find container \"4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9\": container with ID starting with 4b154cb3669ae4620dfde2e5bbce6d6d6c9063f6e201a703c3c7d265ab4f4bb9 not found: ID does not exist" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.223743 4867 scope.go:117] "RemoveContainer" containerID="960aa163fe8a41ee581f2511d96f7323c7607a03eb55cf3d0e75d607d2effb6c" Jan 01 
08:46:26 crc kubenswrapper[4867]: E0101 08:46:26.224228 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960aa163fe8a41ee581f2511d96f7323c7607a03eb55cf3d0e75d607d2effb6c\": container with ID starting with 960aa163fe8a41ee581f2511d96f7323c7607a03eb55cf3d0e75d607d2effb6c not found: ID does not exist" containerID="960aa163fe8a41ee581f2511d96f7323c7607a03eb55cf3d0e75d607d2effb6c" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.224260 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960aa163fe8a41ee581f2511d96f7323c7607a03eb55cf3d0e75d607d2effb6c"} err="failed to get container status \"960aa163fe8a41ee581f2511d96f7323c7607a03eb55cf3d0e75d607d2effb6c\": rpc error: code = NotFound desc = could not find container \"960aa163fe8a41ee581f2511d96f7323c7607a03eb55cf3d0e75d607d2effb6c\": container with ID starting with 960aa163fe8a41ee581f2511d96f7323c7607a03eb55cf3d0e75d607d2effb6c not found: ID does not exist" Jan 01 08:46:26 crc kubenswrapper[4867]: I0101 08:46:26.283598 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-786cc75955-lpfwv"] Jan 01 08:46:26 crc kubenswrapper[4867]: W0101 08:46:26.286062 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49b4a3f5_7ac8_4bb6_866a_5b6ca795ce0d.slice/crio-c4f1f714b0f0850cc6f3d54752698bc5c0032b855b8a65b1bfb9721d5b852c34 WatchSource:0}: Error finding container c4f1f714b0f0850cc6f3d54752698bc5c0032b855b8a65b1bfb9721d5b852c34: Status 404 returned error can't find the container with id c4f1f714b0f0850cc6f3d54752698bc5c0032b855b8a65b1bfb9721d5b852c34 Jan 01 08:46:27 crc kubenswrapper[4867]: I0101 08:46:27.152947 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ebb723-b63c-4ca5-800e-5a2b30633213" 
path="/var/lib/kubelet/pods/79ebb723-b63c-4ca5-800e-5a2b30633213/volumes" Jan 01 08:46:27 crc kubenswrapper[4867]: I0101 08:46:27.166811 4867 generic.go:334] "Generic (PLEG): container finished" podID="49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" containerID="93180d09bcd9a1c91babba1163bcf4b7e516da99bb04040d573f2226d479ec8d" exitCode=0 Jan 01 08:46:27 crc kubenswrapper[4867]: I0101 08:46:27.166879 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" event={"ID":"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d","Type":"ContainerDied","Data":"93180d09bcd9a1c91babba1163bcf4b7e516da99bb04040d573f2226d479ec8d"} Jan 01 08:46:27 crc kubenswrapper[4867]: I0101 08:46:27.166917 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" event={"ID":"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d","Type":"ContainerStarted","Data":"c4f1f714b0f0850cc6f3d54752698bc5c0032b855b8a65b1bfb9721d5b852c34"} Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.180676 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" event={"ID":"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d","Type":"ContainerStarted","Data":"3d48a7cc1f09c73f7654e35066bc60bfc3818573839a37b4f6363c4e57c8dfb4"} Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.181144 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.213747 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" podStartSLOduration=3.213726554 podStartE2EDuration="3.213726554s" podCreationTimestamp="2026-01-01 08:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:46:28.199379061 +0000 UTC m=+1197.334647840" watchObservedRunningTime="2026-01-01 08:46:28.213726554 +0000 UTC 
m=+1197.348995363" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.484064 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.806035 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xsp84"] Jan 01 08:46:28 crc kubenswrapper[4867]: E0101 08:46:28.806570 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ebb723-b63c-4ca5-800e-5a2b30633213" containerName="dnsmasq-dns" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.806627 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ebb723-b63c-4ca5-800e-5a2b30633213" containerName="dnsmasq-dns" Jan 01 08:46:28 crc kubenswrapper[4867]: E0101 08:46:28.806714 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ebb723-b63c-4ca5-800e-5a2b30633213" containerName="init" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.806759 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ebb723-b63c-4ca5-800e-5a2b30633213" containerName="init" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.806959 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ebb723-b63c-4ca5-800e-5a2b30633213" containerName="dnsmasq-dns" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.807543 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xsp84" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.818106 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xsp84"] Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.840442 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c514-account-create-update-qfw8m"] Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.841637 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c514-account-create-update-qfw8m" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.846030 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.849057 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c514-account-create-update-qfw8m"] Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.881354 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bb0c901-c8bf-4767-ba12-56111931051e-operator-scripts\") pod \"cinder-c514-account-create-update-qfw8m\" (UID: \"1bb0c901-c8bf-4767-ba12-56111931051e\") " pod="openstack/cinder-c514-account-create-update-qfw8m" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.881417 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3428c4a3-12ce-4407-8c12-1fa0241c29a5-operator-scripts\") pod \"cinder-db-create-xsp84\" (UID: \"3428c4a3-12ce-4407-8c12-1fa0241c29a5\") " pod="openstack/cinder-db-create-xsp84" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.881439 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbndv\" (UniqueName: \"kubernetes.io/projected/3428c4a3-12ce-4407-8c12-1fa0241c29a5-kube-api-access-gbndv\") pod \"cinder-db-create-xsp84\" (UID: \"3428c4a3-12ce-4407-8c12-1fa0241c29a5\") " pod="openstack/cinder-db-create-xsp84" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.881468 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nchm8\" (UniqueName: \"kubernetes.io/projected/1bb0c901-c8bf-4767-ba12-56111931051e-kube-api-access-nchm8\") pod \"cinder-c514-account-create-update-qfw8m\" 
(UID: \"1bb0c901-c8bf-4767-ba12-56111931051e\") " pod="openstack/cinder-c514-account-create-update-qfw8m" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.908813 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xsk47"] Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.911335 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xsk47" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.930093 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-93ba-account-create-update-q4kzs"] Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.931257 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-93ba-account-create-update-q4kzs" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.934382 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.952928 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xsk47"] Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.966144 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-93ba-account-create-update-q4kzs"] Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.983216 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d29ab84-247c-4e1b-b199-3fa0bcf59771-operator-scripts\") pod \"barbican-93ba-account-create-update-q4kzs\" (UID: \"3d29ab84-247c-4e1b-b199-3fa0bcf59771\") " pod="openstack/barbican-93ba-account-create-update-q4kzs" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.983392 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rdxl\" (UniqueName: 
\"kubernetes.io/projected/6dde095a-4ecb-477d-9699-9867084e2d00-kube-api-access-6rdxl\") pod \"barbican-db-create-xsk47\" (UID: \"6dde095a-4ecb-477d-9699-9867084e2d00\") " pod="openstack/barbican-db-create-xsk47" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.983536 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bb0c901-c8bf-4767-ba12-56111931051e-operator-scripts\") pod \"cinder-c514-account-create-update-qfw8m\" (UID: \"1bb0c901-c8bf-4767-ba12-56111931051e\") " pod="openstack/cinder-c514-account-create-update-qfw8m" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.983586 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3428c4a3-12ce-4407-8c12-1fa0241c29a5-operator-scripts\") pod \"cinder-db-create-xsp84\" (UID: \"3428c4a3-12ce-4407-8c12-1fa0241c29a5\") " pod="openstack/cinder-db-create-xsp84" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.983609 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbndv\" (UniqueName: \"kubernetes.io/projected/3428c4a3-12ce-4407-8c12-1fa0241c29a5-kube-api-access-gbndv\") pod \"cinder-db-create-xsp84\" (UID: \"3428c4a3-12ce-4407-8c12-1fa0241c29a5\") " pod="openstack/cinder-db-create-xsp84" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.983633 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dde095a-4ecb-477d-9699-9867084e2d00-operator-scripts\") pod \"barbican-db-create-xsk47\" (UID: \"6dde095a-4ecb-477d-9699-9867084e2d00\") " pod="openstack/barbican-db-create-xsk47" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.983661 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nchm8\" (UniqueName: 
\"kubernetes.io/projected/1bb0c901-c8bf-4767-ba12-56111931051e-kube-api-access-nchm8\") pod \"cinder-c514-account-create-update-qfw8m\" (UID: \"1bb0c901-c8bf-4767-ba12-56111931051e\") " pod="openstack/cinder-c514-account-create-update-qfw8m" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.983695 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77rpc\" (UniqueName: \"kubernetes.io/projected/3d29ab84-247c-4e1b-b199-3fa0bcf59771-kube-api-access-77rpc\") pod \"barbican-93ba-account-create-update-q4kzs\" (UID: \"3d29ab84-247c-4e1b-b199-3fa0bcf59771\") " pod="openstack/barbican-93ba-account-create-update-q4kzs" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.984426 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bb0c901-c8bf-4767-ba12-56111931051e-operator-scripts\") pod \"cinder-c514-account-create-update-qfw8m\" (UID: \"1bb0c901-c8bf-4767-ba12-56111931051e\") " pod="openstack/cinder-c514-account-create-update-qfw8m" Jan 01 08:46:28 crc kubenswrapper[4867]: I0101 08:46:28.984445 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3428c4a3-12ce-4407-8c12-1fa0241c29a5-operator-scripts\") pod \"cinder-db-create-xsp84\" (UID: \"3428c4a3-12ce-4407-8c12-1fa0241c29a5\") " pod="openstack/cinder-db-create-xsp84" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.005533 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nchm8\" (UniqueName: \"kubernetes.io/projected/1bb0c901-c8bf-4767-ba12-56111931051e-kube-api-access-nchm8\") pod \"cinder-c514-account-create-update-qfw8m\" (UID: \"1bb0c901-c8bf-4767-ba12-56111931051e\") " pod="openstack/cinder-c514-account-create-update-qfw8m" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.021706 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gbndv\" (UniqueName: \"kubernetes.io/projected/3428c4a3-12ce-4407-8c12-1fa0241c29a5-kube-api-access-gbndv\") pod \"cinder-db-create-xsp84\" (UID: \"3428c4a3-12ce-4407-8c12-1fa0241c29a5\") " pod="openstack/cinder-db-create-xsp84" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.042766 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-j5kc9"] Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.043785 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j5kc9" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.054979 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-j5kc9"] Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.084826 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-operator-scripts\") pod \"neutron-db-create-j5kc9\" (UID: \"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09\") " pod="openstack/neutron-db-create-j5kc9" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.084966 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77rpc\" (UniqueName: \"kubernetes.io/projected/3d29ab84-247c-4e1b-b199-3fa0bcf59771-kube-api-access-77rpc\") pod \"barbican-93ba-account-create-update-q4kzs\" (UID: \"3d29ab84-247c-4e1b-b199-3fa0bcf59771\") " pod="openstack/barbican-93ba-account-create-update-q4kzs" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.085036 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d29ab84-247c-4e1b-b199-3fa0bcf59771-operator-scripts\") pod \"barbican-93ba-account-create-update-q4kzs\" (UID: \"3d29ab84-247c-4e1b-b199-3fa0bcf59771\") " 
pod="openstack/barbican-93ba-account-create-update-q4kzs" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.085111 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p7dh\" (UniqueName: \"kubernetes.io/projected/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-kube-api-access-6p7dh\") pod \"neutron-db-create-j5kc9\" (UID: \"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09\") " pod="openstack/neutron-db-create-j5kc9" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.085163 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rdxl\" (UniqueName: \"kubernetes.io/projected/6dde095a-4ecb-477d-9699-9867084e2d00-kube-api-access-6rdxl\") pod \"barbican-db-create-xsk47\" (UID: \"6dde095a-4ecb-477d-9699-9867084e2d00\") " pod="openstack/barbican-db-create-xsk47" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.085368 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dde095a-4ecb-477d-9699-9867084e2d00-operator-scripts\") pod \"barbican-db-create-xsk47\" (UID: \"6dde095a-4ecb-477d-9699-9867084e2d00\") " pod="openstack/barbican-db-create-xsk47" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.086093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dde095a-4ecb-477d-9699-9867084e2d00-operator-scripts\") pod \"barbican-db-create-xsk47\" (UID: \"6dde095a-4ecb-477d-9699-9867084e2d00\") " pod="openstack/barbican-db-create-xsk47" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.086336 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d29ab84-247c-4e1b-b199-3fa0bcf59771-operator-scripts\") pod \"barbican-93ba-account-create-update-q4kzs\" (UID: \"3d29ab84-247c-4e1b-b199-3fa0bcf59771\") " 
pod="openstack/barbican-93ba-account-create-update-q4kzs" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.114022 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rdxl\" (UniqueName: \"kubernetes.io/projected/6dde095a-4ecb-477d-9699-9867084e2d00-kube-api-access-6rdxl\") pod \"barbican-db-create-xsk47\" (UID: \"6dde095a-4ecb-477d-9699-9867084e2d00\") " pod="openstack/barbican-db-create-xsk47" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.121292 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77rpc\" (UniqueName: \"kubernetes.io/projected/3d29ab84-247c-4e1b-b199-3fa0bcf59771-kube-api-access-77rpc\") pod \"barbican-93ba-account-create-update-q4kzs\" (UID: \"3d29ab84-247c-4e1b-b199-3fa0bcf59771\") " pod="openstack/barbican-93ba-account-create-update-q4kzs" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.122127 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xsp84" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.123983 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5a0b-account-create-update-dfp4s"] Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.124905 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5a0b-account-create-update-dfp4s" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.126922 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.156726 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c514-account-create-update-qfw8m" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.175958 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-zqg5l"] Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.176990 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.185920 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.186152 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.186311 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-67p9k" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.186981 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-operator-scripts\") pod \"neutron-db-create-j5kc9\" (UID: \"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09\") " pod="openstack/neutron-db-create-j5kc9" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.187046 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.187076 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9jx2\" (UniqueName: \"kubernetes.io/projected/03b75a91-f656-4340-9b36-3b95732d5138-kube-api-access-f9jx2\") pod \"neutron-5a0b-account-create-update-dfp4s\" (UID: \"03b75a91-f656-4340-9b36-3b95732d5138\") " pod="openstack/neutron-5a0b-account-create-update-dfp4s" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.187108 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03b75a91-f656-4340-9b36-3b95732d5138-operator-scripts\") pod \"neutron-5a0b-account-create-update-dfp4s\" (UID: \"03b75a91-f656-4340-9b36-3b95732d5138\") " pod="openstack/neutron-5a0b-account-create-update-dfp4s" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.187168 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p7dh\" (UniqueName: \"kubernetes.io/projected/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-kube-api-access-6p7dh\") pod \"neutron-db-create-j5kc9\" (UID: \"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09\") " pod="openstack/neutron-db-create-j5kc9" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.190787 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-operator-scripts\") pod \"neutron-db-create-j5kc9\" (UID: \"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09\") " pod="openstack/neutron-db-create-j5kc9" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.190835 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5a0b-account-create-update-dfp4s"] Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.200055 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zqg5l"] Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.211591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p7dh\" (UniqueName: \"kubernetes.io/projected/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-kube-api-access-6p7dh\") pod \"neutron-db-create-j5kc9\" (UID: \"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09\") " pod="openstack/neutron-db-create-j5kc9" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.238739 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xsk47" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.261343 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-93ba-account-create-update-q4kzs" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.288431 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-config-data\") pod \"keystone-db-sync-zqg5l\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.291603 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-combined-ca-bundle\") pod \"keystone-db-sync-zqg5l\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.291951 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9jx2\" (UniqueName: \"kubernetes.io/projected/03b75a91-f656-4340-9b36-3b95732d5138-kube-api-access-f9jx2\") pod \"neutron-5a0b-account-create-update-dfp4s\" (UID: \"03b75a91-f656-4340-9b36-3b95732d5138\") " pod="openstack/neutron-5a0b-account-create-update-dfp4s" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.291989 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03b75a91-f656-4340-9b36-3b95732d5138-operator-scripts\") pod \"neutron-5a0b-account-create-update-dfp4s\" (UID: \"03b75a91-f656-4340-9b36-3b95732d5138\") " pod="openstack/neutron-5a0b-account-create-update-dfp4s" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.292020 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktblc\" (UniqueName: \"kubernetes.io/projected/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-kube-api-access-ktblc\") pod \"keystone-db-sync-zqg5l\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.295875 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03b75a91-f656-4340-9b36-3b95732d5138-operator-scripts\") pod \"neutron-5a0b-account-create-update-dfp4s\" (UID: \"03b75a91-f656-4340-9b36-3b95732d5138\") " pod="openstack/neutron-5a0b-account-create-update-dfp4s" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.316473 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9jx2\" (UniqueName: \"kubernetes.io/projected/03b75a91-f656-4340-9b36-3b95732d5138-kube-api-access-f9jx2\") pod \"neutron-5a0b-account-create-update-dfp4s\" (UID: \"03b75a91-f656-4340-9b36-3b95732d5138\") " pod="openstack/neutron-5a0b-account-create-update-dfp4s" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.386509 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-j5kc9" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.393377 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-config-data\") pod \"keystone-db-sync-zqg5l\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.393421 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-combined-ca-bundle\") pod \"keystone-db-sync-zqg5l\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.393501 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktblc\" (UniqueName: \"kubernetes.io/projected/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-kube-api-access-ktblc\") pod \"keystone-db-sync-zqg5l\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.405142 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-config-data\") pod \"keystone-db-sync-zqg5l\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.406070 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-combined-ca-bundle\") pod \"keystone-db-sync-zqg5l\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 
08:46:29.423745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktblc\" (UniqueName: \"kubernetes.io/projected/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-kube-api-access-ktblc\") pod \"keystone-db-sync-zqg5l\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.534783 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5a0b-account-create-update-dfp4s" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.539669 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.746344 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xsp84"] Jan 01 08:46:29 crc kubenswrapper[4867]: W0101 08:46:29.749923 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3428c4a3_12ce_4407_8c12_1fa0241c29a5.slice/crio-78d098c2bfaa6c8c3732dafe28c4a97908cf1c606ca5b0ab2b30c603a04a82c0 WatchSource:0}: Error finding container 78d098c2bfaa6c8c3732dafe28c4a97908cf1c606ca5b0ab2b30c603a04a82c0: Status 404 returned error can't find the container with id 78d098c2bfaa6c8c3732dafe28c4a97908cf1c606ca5b0ab2b30c603a04a82c0 Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.844847 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xsk47"] Jan 01 08:46:29 crc kubenswrapper[4867]: W0101 08:46:29.850430 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dde095a_4ecb_477d_9699_9867084e2d00.slice/crio-1362099d5a749f7749e362fad9686204e3d7b69841f3f160caf9a45f6bd0911f WatchSource:0}: Error finding container 1362099d5a749f7749e362fad9686204e3d7b69841f3f160caf9a45f6bd0911f: Status 404 
returned error can't find the container with id 1362099d5a749f7749e362fad9686204e3d7b69841f3f160caf9a45f6bd0911f Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.863451 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c514-account-create-update-qfw8m"] Jan 01 08:46:29 crc kubenswrapper[4867]: W0101 08:46:29.864438 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bb0c901_c8bf_4767_ba12_56111931051e.slice/crio-660639faa945ab1f07976057b286023334ceaa3e8149e79bac4e169492de8f07 WatchSource:0}: Error finding container 660639faa945ab1f07976057b286023334ceaa3e8149e79bac4e169492de8f07: Status 404 returned error can't find the container with id 660639faa945ab1f07976057b286023334ceaa3e8149e79bac4e169492de8f07 Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.968423 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-93ba-account-create-update-q4kzs"] Jan 01 08:46:29 crc kubenswrapper[4867]: W0101 08:46:29.981460 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d29ab84_247c_4e1b_b199_3fa0bcf59771.slice/crio-9b74decc659170d0f66ce1125a127091d3ffc01d3d8bd0b0a52e9151130b4fd2 WatchSource:0}: Error finding container 9b74decc659170d0f66ce1125a127091d3ffc01d3d8bd0b0a52e9151130b4fd2: Status 404 returned error can't find the container with id 9b74decc659170d0f66ce1125a127091d3ffc01d3d8bd0b0a52e9151130b4fd2 Jan 01 08:46:29 crc kubenswrapper[4867]: I0101 08:46:29.982475 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-j5kc9"] Jan 01 08:46:29 crc kubenswrapper[4867]: W0101 08:46:29.984238 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3ae1a6d_6f42_40a0_87a2_488ee05a0c09.slice/crio-44c35cf55d797c161d375904abdc26522403e721904533f1c917dc0cc63e384a WatchSource:0}: Error finding container 44c35cf55d797c161d375904abdc26522403e721904533f1c917dc0cc63e384a: Status 404 returned error can't find the container with id 44c35cf55d797c161d375904abdc26522403e721904533f1c917dc0cc63e384a Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.122779 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5a0b-account-create-update-dfp4s"] Jan 01 08:46:30 crc kubenswrapper[4867]: W0101 08:46:30.126996 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03b75a91_f656_4340_9b36_3b95732d5138.slice/crio-03c8119d007360dd8f495292c6114c751f62d208f56ceedf84dd4d8d4cbf2145 WatchSource:0}: Error finding container 03c8119d007360dd8f495292c6114c751f62d208f56ceedf84dd4d8d4cbf2145: Status 404 returned error can't find the container with id 03c8119d007360dd8f495292c6114c751f62d208f56ceedf84dd4d8d4cbf2145 Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.195263 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zqg5l"] Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.206244 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j5kc9" event={"ID":"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09","Type":"ContainerStarted","Data":"44c35cf55d797c161d375904abdc26522403e721904533f1c917dc0cc63e384a"} Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.207515 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-93ba-account-create-update-q4kzs" event={"ID":"3d29ab84-247c-4e1b-b199-3fa0bcf59771","Type":"ContainerStarted","Data":"9b74decc659170d0f66ce1125a127091d3ffc01d3d8bd0b0a52e9151130b4fd2"} Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.209538 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xsk47" event={"ID":"6dde095a-4ecb-477d-9699-9867084e2d00","Type":"ContainerStarted","Data":"6da8d6827e35aa61466e5baf27b39069b8e735896ff0db9421f9f30c4950bc50"} Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.209569 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xsk47" event={"ID":"6dde095a-4ecb-477d-9699-9867084e2d00","Type":"ContainerStarted","Data":"1362099d5a749f7749e362fad9686204e3d7b69841f3f160caf9a45f6bd0911f"} Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.213735 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5a0b-account-create-update-dfp4s" event={"ID":"03b75a91-f656-4340-9b36-3b95732d5138","Type":"ContainerStarted","Data":"03c8119d007360dd8f495292c6114c751f62d208f56ceedf84dd4d8d4cbf2145"} Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.221116 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c514-account-create-update-qfw8m" event={"ID":"1bb0c901-c8bf-4767-ba12-56111931051e","Type":"ContainerStarted","Data":"1019ef1d5ef9d449064f64b97f1872918f743f75ee494fc24a8b9c9b5d1bdb12"} Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.221167 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c514-account-create-update-qfw8m" event={"ID":"1bb0c901-c8bf-4767-ba12-56111931051e","Type":"ContainerStarted","Data":"660639faa945ab1f07976057b286023334ceaa3e8149e79bac4e169492de8f07"} Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.224556 4867 generic.go:334] "Generic (PLEG): container finished" podID="3428c4a3-12ce-4407-8c12-1fa0241c29a5" containerID="55b82754575fc6c9e726dd5ee34b5516da67786d9139b5d74b285932ea1f32ca" exitCode=0 Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.224594 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xsp84" 
event={"ID":"3428c4a3-12ce-4407-8c12-1fa0241c29a5","Type":"ContainerDied","Data":"55b82754575fc6c9e726dd5ee34b5516da67786d9139b5d74b285932ea1f32ca"} Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.224616 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xsp84" event={"ID":"3428c4a3-12ce-4407-8c12-1fa0241c29a5","Type":"ContainerStarted","Data":"78d098c2bfaa6c8c3732dafe28c4a97908cf1c606ca5b0ab2b30c603a04a82c0"} Jan 01 08:46:30 crc kubenswrapper[4867]: W0101 08:46:30.238835 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30fc9439_d9d2_4a19_9ffd_2b80a7269e77.slice/crio-af3adad6e955b33b1ed292fca391a271fe558343d48eeb1c6893c62c9e2bd4cf WatchSource:0}: Error finding container af3adad6e955b33b1ed292fca391a271fe558343d48eeb1c6893c62c9e2bd4cf: Status 404 returned error can't find the container with id af3adad6e955b33b1ed292fca391a271fe558343d48eeb1c6893c62c9e2bd4cf Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.254318 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c514-account-create-update-qfw8m" podStartSLOduration=2.254298668 podStartE2EDuration="2.254298668s" podCreationTimestamp="2026-01-01 08:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:46:30.253228587 +0000 UTC m=+1199.388497376" watchObservedRunningTime="2026-01-01 08:46:30.254298668 +0000 UTC m=+1199.389567457" Jan 01 08:46:30 crc kubenswrapper[4867]: I0101 08:46:30.256631 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-xsk47" podStartSLOduration=2.256616723 podStartE2EDuration="2.256616723s" podCreationTimestamp="2026-01-01 08:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-01 08:46:30.235447428 +0000 UTC m=+1199.370716197" watchObservedRunningTime="2026-01-01 08:46:30.256616723 +0000 UTC m=+1199.391885502" Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.234067 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zqg5l" event={"ID":"30fc9439-d9d2-4a19-9ffd-2b80a7269e77","Type":"ContainerStarted","Data":"af3adad6e955b33b1ed292fca391a271fe558343d48eeb1c6893c62c9e2bd4cf"} Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.238508 4867 generic.go:334] "Generic (PLEG): container finished" podID="1bb0c901-c8bf-4767-ba12-56111931051e" containerID="1019ef1d5ef9d449064f64b97f1872918f743f75ee494fc24a8b9c9b5d1bdb12" exitCode=0 Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.238613 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c514-account-create-update-qfw8m" event={"ID":"1bb0c901-c8bf-4767-ba12-56111931051e","Type":"ContainerDied","Data":"1019ef1d5ef9d449064f64b97f1872918f743f75ee494fc24a8b9c9b5d1bdb12"} Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.242856 4867 generic.go:334] "Generic (PLEG): container finished" podID="b3ae1a6d-6f42-40a0-87a2-488ee05a0c09" containerID="11198450002d61606555b27ce0d74c2698b23a1a3fbd35217506bd8a14f9de29" exitCode=0 Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.243018 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j5kc9" event={"ID":"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09","Type":"ContainerDied","Data":"11198450002d61606555b27ce0d74c2698b23a1a3fbd35217506bd8a14f9de29"} Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.244418 4867 generic.go:334] "Generic (PLEG): container finished" podID="3d29ab84-247c-4e1b-b199-3fa0bcf59771" containerID="47cb0673b389a01e11b37b3511a5b9946a15f3796633aee01d44a7b3a87da5dc" exitCode=0 Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.244513 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-93ba-account-create-update-q4kzs" event={"ID":"3d29ab84-247c-4e1b-b199-3fa0bcf59771","Type":"ContainerDied","Data":"47cb0673b389a01e11b37b3511a5b9946a15f3796633aee01d44a7b3a87da5dc"} Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.246819 4867 generic.go:334] "Generic (PLEG): container finished" podID="6dde095a-4ecb-477d-9699-9867084e2d00" containerID="6da8d6827e35aa61466e5baf27b39069b8e735896ff0db9421f9f30c4950bc50" exitCode=0 Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.246862 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xsk47" event={"ID":"6dde095a-4ecb-477d-9699-9867084e2d00","Type":"ContainerDied","Data":"6da8d6827e35aa61466e5baf27b39069b8e735896ff0db9421f9f30c4950bc50"} Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.253359 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5a0b-account-create-update-dfp4s" event={"ID":"03b75a91-f656-4340-9b36-3b95732d5138","Type":"ContainerDied","Data":"c9f0f002e87ad70f25a706911fd3098bc1fdeefab04560ef790da58f6517f5f2"} Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.253320 4867 generic.go:334] "Generic (PLEG): container finished" podID="03b75a91-f656-4340-9b36-3b95732d5138" containerID="c9f0f002e87ad70f25a706911fd3098bc1fdeefab04560ef790da58f6517f5f2" exitCode=0 Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.610311 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-xsp84" Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.751334 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbndv\" (UniqueName: \"kubernetes.io/projected/3428c4a3-12ce-4407-8c12-1fa0241c29a5-kube-api-access-gbndv\") pod \"3428c4a3-12ce-4407-8c12-1fa0241c29a5\" (UID: \"3428c4a3-12ce-4407-8c12-1fa0241c29a5\") " Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.751500 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3428c4a3-12ce-4407-8c12-1fa0241c29a5-operator-scripts\") pod \"3428c4a3-12ce-4407-8c12-1fa0241c29a5\" (UID: \"3428c4a3-12ce-4407-8c12-1fa0241c29a5\") " Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.752331 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3428c4a3-12ce-4407-8c12-1fa0241c29a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3428c4a3-12ce-4407-8c12-1fa0241c29a5" (UID: "3428c4a3-12ce-4407-8c12-1fa0241c29a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.758355 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3428c4a3-12ce-4407-8c12-1fa0241c29a5-kube-api-access-gbndv" (OuterVolumeSpecName: "kube-api-access-gbndv") pod "3428c4a3-12ce-4407-8c12-1fa0241c29a5" (UID: "3428c4a3-12ce-4407-8c12-1fa0241c29a5"). InnerVolumeSpecName "kube-api-access-gbndv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.853796 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3428c4a3-12ce-4407-8c12-1fa0241c29a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.853830 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbndv\" (UniqueName: \"kubernetes.io/projected/3428c4a3-12ce-4407-8c12-1fa0241c29a5-kube-api-access-gbndv\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.932125 4867 scope.go:117] "RemoveContainer" containerID="94d1ad37452e5979c8b02a8ae22858efb4e41797b6b8368f3d94dbf1ba0c8f6b" Jan 01 08:46:31 crc kubenswrapper[4867]: I0101 08:46:31.954605 4867 scope.go:117] "RemoveContainer" containerID="dfa246c2bb32c45f517df66b12edc5b7137dd4fd11bf05e0d55f802b1f4e10c9" Jan 01 08:46:32 crc kubenswrapper[4867]: I0101 08:46:32.265604 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xsp84" Jan 01 08:46:32 crc kubenswrapper[4867]: I0101 08:46:32.265606 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xsp84" event={"ID":"3428c4a3-12ce-4407-8c12-1fa0241c29a5","Type":"ContainerDied","Data":"78d098c2bfaa6c8c3732dafe28c4a97908cf1c606ca5b0ab2b30c603a04a82c0"} Jan 01 08:46:32 crc kubenswrapper[4867]: I0101 08:46:32.265655 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78d098c2bfaa6c8c3732dafe28c4a97908cf1c606ca5b0ab2b30c603a04a82c0" Jan 01 08:46:34 crc kubenswrapper[4867]: I0101 08:46:34.578314 4867 scope.go:117] "RemoveContainer" containerID="09b75ca6c152730a04e0606c1d302b69dd692c85df356d55d52c369b61346c2b" Jan 01 08:46:34 crc kubenswrapper[4867]: I0101 08:46:34.836269 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5a0b-account-create-update-dfp4s" Jan 01 08:46:34 crc kubenswrapper[4867]: I0101 08:46:34.860248 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j5kc9" Jan 01 08:46:34 crc kubenswrapper[4867]: I0101 08:46:34.879090 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xsk47" Jan 01 08:46:34 crc kubenswrapper[4867]: I0101 08:46:34.898563 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9jx2\" (UniqueName: \"kubernetes.io/projected/03b75a91-f656-4340-9b36-3b95732d5138-kube-api-access-f9jx2\") pod \"03b75a91-f656-4340-9b36-3b95732d5138\" (UID: \"03b75a91-f656-4340-9b36-3b95732d5138\") " Jan 01 08:46:34 crc kubenswrapper[4867]: I0101 08:46:34.898800 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03b75a91-f656-4340-9b36-3b95732d5138-operator-scripts\") pod \"03b75a91-f656-4340-9b36-3b95732d5138\" (UID: \"03b75a91-f656-4340-9b36-3b95732d5138\") " Jan 01 08:46:34 crc kubenswrapper[4867]: I0101 08:46:34.907807 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b75a91-f656-4340-9b36-3b95732d5138-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03b75a91-f656-4340-9b36-3b95732d5138" (UID: "03b75a91-f656-4340-9b36-3b95732d5138"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:34 crc kubenswrapper[4867]: I0101 08:46:34.912942 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b75a91-f656-4340-9b36-3b95732d5138-kube-api-access-f9jx2" (OuterVolumeSpecName: "kube-api-access-f9jx2") pod "03b75a91-f656-4340-9b36-3b95732d5138" (UID: "03b75a91-f656-4340-9b36-3b95732d5138"). 
InnerVolumeSpecName "kube-api-access-f9jx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:34 crc kubenswrapper[4867]: I0101 08:46:34.917081 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-93ba-account-create-update-q4kzs" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.000266 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c514-account-create-update-qfw8m" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.000618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-operator-scripts\") pod \"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09\" (UID: \"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09\") " Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.000687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rdxl\" (UniqueName: \"kubernetes.io/projected/6dde095a-4ecb-477d-9699-9867084e2d00-kube-api-access-6rdxl\") pod \"6dde095a-4ecb-477d-9699-9867084e2d00\" (UID: \"6dde095a-4ecb-477d-9699-9867084e2d00\") " Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.001242 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3ae1a6d-6f42-40a0-87a2-488ee05a0c09" (UID: "b3ae1a6d-6f42-40a0-87a2-488ee05a0c09"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.001680 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d29ab84-247c-4e1b-b199-3fa0bcf59771-operator-scripts\") pod \"3d29ab84-247c-4e1b-b199-3fa0bcf59771\" (UID: \"3d29ab84-247c-4e1b-b199-3fa0bcf59771\") " Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.001751 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dde095a-4ecb-477d-9699-9867084e2d00-operator-scripts\") pod \"6dde095a-4ecb-477d-9699-9867084e2d00\" (UID: \"6dde095a-4ecb-477d-9699-9867084e2d00\") " Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.001779 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p7dh\" (UniqueName: \"kubernetes.io/projected/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-kube-api-access-6p7dh\") pod \"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09\" (UID: \"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09\") " Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.001800 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77rpc\" (UniqueName: \"kubernetes.io/projected/3d29ab84-247c-4e1b-b199-3fa0bcf59771-kube-api-access-77rpc\") pod \"3d29ab84-247c-4e1b-b199-3fa0bcf59771\" (UID: \"3d29ab84-247c-4e1b-b199-3fa0bcf59771\") " Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.002361 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9jx2\" (UniqueName: \"kubernetes.io/projected/03b75a91-f656-4340-9b36-3b95732d5138-kube-api-access-f9jx2\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.002385 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/03b75a91-f656-4340-9b36-3b95732d5138-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.002414 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.003446 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dde095a-4ecb-477d-9699-9867084e2d00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6dde095a-4ecb-477d-9699-9867084e2d00" (UID: "6dde095a-4ecb-477d-9699-9867084e2d00"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.003650 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dde095a-4ecb-477d-9699-9867084e2d00-kube-api-access-6rdxl" (OuterVolumeSpecName: "kube-api-access-6rdxl") pod "6dde095a-4ecb-477d-9699-9867084e2d00" (UID: "6dde095a-4ecb-477d-9699-9867084e2d00"). InnerVolumeSpecName "kube-api-access-6rdxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.003728 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d29ab84-247c-4e1b-b199-3fa0bcf59771-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d29ab84-247c-4e1b-b199-3fa0bcf59771" (UID: "3d29ab84-247c-4e1b-b199-3fa0bcf59771"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.005653 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-kube-api-access-6p7dh" (OuterVolumeSpecName: "kube-api-access-6p7dh") pod "b3ae1a6d-6f42-40a0-87a2-488ee05a0c09" (UID: "b3ae1a6d-6f42-40a0-87a2-488ee05a0c09"). InnerVolumeSpecName "kube-api-access-6p7dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.008588 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d29ab84-247c-4e1b-b199-3fa0bcf59771-kube-api-access-77rpc" (OuterVolumeSpecName: "kube-api-access-77rpc") pod "3d29ab84-247c-4e1b-b199-3fa0bcf59771" (UID: "3d29ab84-247c-4e1b-b199-3fa0bcf59771"). InnerVolumeSpecName "kube-api-access-77rpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.103654 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bb0c901-c8bf-4767-ba12-56111931051e-operator-scripts\") pod \"1bb0c901-c8bf-4767-ba12-56111931051e\" (UID: \"1bb0c901-c8bf-4767-ba12-56111931051e\") " Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.104119 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bb0c901-c8bf-4767-ba12-56111931051e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bb0c901-c8bf-4767-ba12-56111931051e" (UID: "1bb0c901-c8bf-4767-ba12-56111931051e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.104318 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nchm8\" (UniqueName: \"kubernetes.io/projected/1bb0c901-c8bf-4767-ba12-56111931051e-kube-api-access-nchm8\") pod \"1bb0c901-c8bf-4767-ba12-56111931051e\" (UID: \"1bb0c901-c8bf-4767-ba12-56111931051e\") " Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.104789 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rdxl\" (UniqueName: \"kubernetes.io/projected/6dde095a-4ecb-477d-9699-9867084e2d00-kube-api-access-6rdxl\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.104809 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bb0c901-c8bf-4767-ba12-56111931051e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.104820 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d29ab84-247c-4e1b-b199-3fa0bcf59771-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.104849 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dde095a-4ecb-477d-9699-9867084e2d00-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.104862 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p7dh\" (UniqueName: \"kubernetes.io/projected/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09-kube-api-access-6p7dh\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.104872 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77rpc\" (UniqueName: 
\"kubernetes.io/projected/3d29ab84-247c-4e1b-b199-3fa0bcf59771-kube-api-access-77rpc\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.107261 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb0c901-c8bf-4767-ba12-56111931051e-kube-api-access-nchm8" (OuterVolumeSpecName: "kube-api-access-nchm8") pod "1bb0c901-c8bf-4767-ba12-56111931051e" (UID: "1bb0c901-c8bf-4767-ba12-56111931051e"). InnerVolumeSpecName "kube-api-access-nchm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.206546 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nchm8\" (UniqueName: \"kubernetes.io/projected/1bb0c901-c8bf-4767-ba12-56111931051e-kube-api-access-nchm8\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.294819 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j5kc9" event={"ID":"b3ae1a6d-6f42-40a0-87a2-488ee05a0c09","Type":"ContainerDied","Data":"44c35cf55d797c161d375904abdc26522403e721904533f1c917dc0cc63e384a"} Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.294873 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44c35cf55d797c161d375904abdc26522403e721904533f1c917dc0cc63e384a" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.295082 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j5kc9" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.296948 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-93ba-account-create-update-q4kzs" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.296953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-93ba-account-create-update-q4kzs" event={"ID":"3d29ab84-247c-4e1b-b199-3fa0bcf59771","Type":"ContainerDied","Data":"9b74decc659170d0f66ce1125a127091d3ffc01d3d8bd0b0a52e9151130b4fd2"} Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.296987 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b74decc659170d0f66ce1125a127091d3ffc01d3d8bd0b0a52e9151130b4fd2" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.299285 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xsk47" event={"ID":"6dde095a-4ecb-477d-9699-9867084e2d00","Type":"ContainerDied","Data":"1362099d5a749f7749e362fad9686204e3d7b69841f3f160caf9a45f6bd0911f"} Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.299311 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1362099d5a749f7749e362fad9686204e3d7b69841f3f160caf9a45f6bd0911f" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.299337 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xsk47" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.301812 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zqg5l" event={"ID":"30fc9439-d9d2-4a19-9ffd-2b80a7269e77","Type":"ContainerStarted","Data":"2059162a240a8916135e9afa7c5fdbf55bbc69712359831defcfc55e9745560a"} Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.306229 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c514-account-create-update-qfw8m" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.306298 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c514-account-create-update-qfw8m" event={"ID":"1bb0c901-c8bf-4767-ba12-56111931051e","Type":"ContainerDied","Data":"660639faa945ab1f07976057b286023334ceaa3e8149e79bac4e169492de8f07"} Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.306430 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="660639faa945ab1f07976057b286023334ceaa3e8149e79bac4e169492de8f07" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.311864 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5a0b-account-create-update-dfp4s" event={"ID":"03b75a91-f656-4340-9b36-3b95732d5138","Type":"ContainerDied","Data":"03c8119d007360dd8f495292c6114c751f62d208f56ceedf84dd4d8d4cbf2145"} Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.311920 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03c8119d007360dd8f495292c6114c751f62d208f56ceedf84dd4d8d4cbf2145" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.311930 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5a0b-account-create-update-dfp4s" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.329238 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-zqg5l" podStartSLOduration=1.94925681 podStartE2EDuration="6.329218904s" podCreationTimestamp="2026-01-01 08:46:29 +0000 UTC" firstStartedPulling="2026-01-01 08:46:30.241190859 +0000 UTC m=+1199.376459628" lastFinishedPulling="2026-01-01 08:46:34.621152933 +0000 UTC m=+1203.756421722" observedRunningTime="2026-01-01 08:46:35.321730534 +0000 UTC m=+1204.456999323" watchObservedRunningTime="2026-01-01 08:46:35.329218904 +0000 UTC m=+1204.464487673" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.865037 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.962048 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-x5wp2"] Jan 01 08:46:35 crc kubenswrapper[4867]: I0101 08:46:35.962536 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" podUID="e1293cec-4975-472a-adc0-8d14637a64fe" containerName="dnsmasq-dns" containerID="cri-o://0b406d5c43d02ac6453d2af0a9651013ddc04057462f18dba4ed864d0c258363" gracePeriod=10 Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.327362 4867 generic.go:334] "Generic (PLEG): container finished" podID="e1293cec-4975-472a-adc0-8d14637a64fe" containerID="0b406d5c43d02ac6453d2af0a9651013ddc04057462f18dba4ed864d0c258363" exitCode=0 Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.327758 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" event={"ID":"e1293cec-4975-472a-adc0-8d14637a64fe","Type":"ContainerDied","Data":"0b406d5c43d02ac6453d2af0a9651013ddc04057462f18dba4ed864d0c258363"} Jan 01 08:46:36 crc 
kubenswrapper[4867]: I0101 08:46:36.414428 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.530701 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-nb\") pod \"e1293cec-4975-472a-adc0-8d14637a64fe\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.530749 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-sb\") pod \"e1293cec-4975-472a-adc0-8d14637a64fe\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.530839 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-config\") pod \"e1293cec-4975-472a-adc0-8d14637a64fe\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.530912 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-dns-svc\") pod \"e1293cec-4975-472a-adc0-8d14637a64fe\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.531110 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmp96\" (UniqueName: \"kubernetes.io/projected/e1293cec-4975-472a-adc0-8d14637a64fe-kube-api-access-dmp96\") pod \"e1293cec-4975-472a-adc0-8d14637a64fe\" (UID: \"e1293cec-4975-472a-adc0-8d14637a64fe\") " Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.542038 4867 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1293cec-4975-472a-adc0-8d14637a64fe-kube-api-access-dmp96" (OuterVolumeSpecName: "kube-api-access-dmp96") pod "e1293cec-4975-472a-adc0-8d14637a64fe" (UID: "e1293cec-4975-472a-adc0-8d14637a64fe"). InnerVolumeSpecName "kube-api-access-dmp96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.578662 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1293cec-4975-472a-adc0-8d14637a64fe" (UID: "e1293cec-4975-472a-adc0-8d14637a64fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.579544 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1293cec-4975-472a-adc0-8d14637a64fe" (UID: "e1293cec-4975-472a-adc0-8d14637a64fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.582473 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1293cec-4975-472a-adc0-8d14637a64fe" (UID: "e1293cec-4975-472a-adc0-8d14637a64fe"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.588229 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-config" (OuterVolumeSpecName: "config") pod "e1293cec-4975-472a-adc0-8d14637a64fe" (UID: "e1293cec-4975-472a-adc0-8d14637a64fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.633261 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmp96\" (UniqueName: \"kubernetes.io/projected/e1293cec-4975-472a-adc0-8d14637a64fe-kube-api-access-dmp96\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.633295 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.633308 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.633321 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:36 crc kubenswrapper[4867]: I0101 08:46:36.633333 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1293cec-4975-472a-adc0-8d14637a64fe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:37 crc kubenswrapper[4867]: I0101 08:46:37.337120 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" 
event={"ID":"e1293cec-4975-472a-adc0-8d14637a64fe","Type":"ContainerDied","Data":"84044f6f92c0a380f8340b3831ce5ae52f63d3173d961a67ef42c3d73cc362b0"} Jan 01 08:46:37 crc kubenswrapper[4867]: I0101 08:46:37.337211 4867 scope.go:117] "RemoveContainer" containerID="0b406d5c43d02ac6453d2af0a9651013ddc04057462f18dba4ed864d0c258363" Jan 01 08:46:37 crc kubenswrapper[4867]: I0101 08:46:37.337240 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-x5wp2" Jan 01 08:46:37 crc kubenswrapper[4867]: I0101 08:46:37.375007 4867 scope.go:117] "RemoveContainer" containerID="6b49aef3bdc30e3be5984484878d48a8da496fe53750f664add0a142424f5d99" Jan 01 08:46:37 crc kubenswrapper[4867]: I0101 08:46:37.376165 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-x5wp2"] Jan 01 08:46:37 crc kubenswrapper[4867]: I0101 08:46:37.385652 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-x5wp2"] Jan 01 08:46:39 crc kubenswrapper[4867]: I0101 08:46:39.144239 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1293cec-4975-472a-adc0-8d14637a64fe" path="/var/lib/kubelet/pods/e1293cec-4975-472a-adc0-8d14637a64fe/volumes" Jan 01 08:46:39 crc kubenswrapper[4867]: I0101 08:46:39.363726 4867 generic.go:334] "Generic (PLEG): container finished" podID="30fc9439-d9d2-4a19-9ffd-2b80a7269e77" containerID="2059162a240a8916135e9afa7c5fdbf55bbc69712359831defcfc55e9745560a" exitCode=0 Jan 01 08:46:39 crc kubenswrapper[4867]: I0101 08:46:39.363777 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zqg5l" event={"ID":"30fc9439-d9d2-4a19-9ffd-2b80a7269e77","Type":"ContainerDied","Data":"2059162a240a8916135e9afa7c5fdbf55bbc69712359831defcfc55e9745560a"} Jan 01 08:46:40 crc kubenswrapper[4867]: I0101 08:46:40.717845 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:40 crc kubenswrapper[4867]: I0101 08:46:40.805537 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-combined-ca-bundle\") pod \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " Jan 01 08:46:40 crc kubenswrapper[4867]: I0101 08:46:40.806137 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-config-data\") pod \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " Jan 01 08:46:40 crc kubenswrapper[4867]: I0101 08:46:40.806240 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktblc\" (UniqueName: \"kubernetes.io/projected/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-kube-api-access-ktblc\") pod \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\" (UID: \"30fc9439-d9d2-4a19-9ffd-2b80a7269e77\") " Jan 01 08:46:40 crc kubenswrapper[4867]: I0101 08:46:40.810759 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-kube-api-access-ktblc" (OuterVolumeSpecName: "kube-api-access-ktblc") pod "30fc9439-d9d2-4a19-9ffd-2b80a7269e77" (UID: "30fc9439-d9d2-4a19-9ffd-2b80a7269e77"). InnerVolumeSpecName "kube-api-access-ktblc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:40 crc kubenswrapper[4867]: I0101 08:46:40.842423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-config-data" (OuterVolumeSpecName: "config-data") pod "30fc9439-d9d2-4a19-9ffd-2b80a7269e77" (UID: "30fc9439-d9d2-4a19-9ffd-2b80a7269e77"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:46:40 crc kubenswrapper[4867]: I0101 08:46:40.850666 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30fc9439-d9d2-4a19-9ffd-2b80a7269e77" (UID: "30fc9439-d9d2-4a19-9ffd-2b80a7269e77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:46:40 crc kubenswrapper[4867]: I0101 08:46:40.908780 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:40 crc kubenswrapper[4867]: I0101 08:46:40.908821 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:40 crc kubenswrapper[4867]: I0101 08:46:40.908839 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktblc\" (UniqueName: \"kubernetes.io/projected/30fc9439-d9d2-4a19-9ffd-2b80a7269e77-kube-api-access-ktblc\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.384506 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zqg5l" event={"ID":"30fc9439-d9d2-4a19-9ffd-2b80a7269e77","Type":"ContainerDied","Data":"af3adad6e955b33b1ed292fca391a271fe558343d48eeb1c6893c62c9e2bd4cf"} Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.384604 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af3adad6e955b33b1ed292fca391a271fe558343d48eeb1c6893c62c9e2bd4cf" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.384680 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zqg5l" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.689036 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8qpzn"] Jan 01 08:46:41 crc kubenswrapper[4867]: E0101 08:46:41.689610 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dde095a-4ecb-477d-9699-9867084e2d00" containerName="mariadb-database-create" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.689633 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dde095a-4ecb-477d-9699-9867084e2d00" containerName="mariadb-database-create" Jan 01 08:46:41 crc kubenswrapper[4867]: E0101 08:46:41.689650 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ae1a6d-6f42-40a0-87a2-488ee05a0c09" containerName="mariadb-database-create" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.689660 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ae1a6d-6f42-40a0-87a2-488ee05a0c09" containerName="mariadb-database-create" Jan 01 08:46:41 crc kubenswrapper[4867]: E0101 08:46:41.689673 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb0c901-c8bf-4767-ba12-56111931051e" containerName="mariadb-account-create-update" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.689680 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb0c901-c8bf-4767-ba12-56111931051e" containerName="mariadb-account-create-update" Jan 01 08:46:41 crc kubenswrapper[4867]: E0101 08:46:41.689693 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30fc9439-d9d2-4a19-9ffd-2b80a7269e77" containerName="keystone-db-sync" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.689701 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="30fc9439-d9d2-4a19-9ffd-2b80a7269e77" containerName="keystone-db-sync" Jan 01 08:46:41 crc kubenswrapper[4867]: E0101 08:46:41.689714 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e1293cec-4975-472a-adc0-8d14637a64fe" containerName="dnsmasq-dns" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.689721 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1293cec-4975-472a-adc0-8d14637a64fe" containerName="dnsmasq-dns" Jan 01 08:46:41 crc kubenswrapper[4867]: E0101 08:46:41.689731 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b75a91-f656-4340-9b36-3b95732d5138" containerName="mariadb-account-create-update" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.689737 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b75a91-f656-4340-9b36-3b95732d5138" containerName="mariadb-account-create-update" Jan 01 08:46:41 crc kubenswrapper[4867]: E0101 08:46:41.689753 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3428c4a3-12ce-4407-8c12-1fa0241c29a5" containerName="mariadb-database-create" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.689761 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3428c4a3-12ce-4407-8c12-1fa0241c29a5" containerName="mariadb-database-create" Jan 01 08:46:41 crc kubenswrapper[4867]: E0101 08:46:41.689772 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d29ab84-247c-4e1b-b199-3fa0bcf59771" containerName="mariadb-account-create-update" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.689779 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d29ab84-247c-4e1b-b199-3fa0bcf59771" containerName="mariadb-account-create-update" Jan 01 08:46:41 crc kubenswrapper[4867]: E0101 08:46:41.689792 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1293cec-4975-472a-adc0-8d14637a64fe" containerName="init" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.689799 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1293cec-4975-472a-adc0-8d14637a64fe" containerName="init" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.689997 4867 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="3d29ab84-247c-4e1b-b199-3fa0bcf59771" containerName="mariadb-account-create-update" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.690014 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb0c901-c8bf-4767-ba12-56111931051e" containerName="mariadb-account-create-update" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.690026 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1293cec-4975-472a-adc0-8d14637a64fe" containerName="dnsmasq-dns" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.690034 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b75a91-f656-4340-9b36-3b95732d5138" containerName="mariadb-account-create-update" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.690045 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ae1a6d-6f42-40a0-87a2-488ee05a0c09" containerName="mariadb-database-create" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.690054 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3428c4a3-12ce-4407-8c12-1fa0241c29a5" containerName="mariadb-database-create" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.690062 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dde095a-4ecb-477d-9699-9867084e2d00" containerName="mariadb-database-create" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.690071 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="30fc9439-d9d2-4a19-9ffd-2b80a7269e77" containerName="keystone-db-sync" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.690823 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.693184 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.693197 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.693225 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.696093 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.698034 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-67p9k" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.702966 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9bbd44f9c-fm26p"] Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.705086 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.709056 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8qpzn"] Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.727268 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-credential-keys\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.727347 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6tv8\" (UniqueName: \"kubernetes.io/projected/258f483b-5c81-4662-a95f-402d907ebfcc-kube-api-access-d6tv8\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.727381 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-scripts\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.727406 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-config-data\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.727420 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-fernet-keys\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.727460 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-combined-ca-bundle\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.768051 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9bbd44f9c-fm26p"] Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.828826 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6tv8\" (UniqueName: \"kubernetes.io/projected/258f483b-5c81-4662-a95f-402d907ebfcc-kube-api-access-d6tv8\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.828876 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-scripts\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.828917 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-swift-storage-0\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: 
I0101 08:46:41.828938 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7zbn\" (UniqueName: \"kubernetes.io/projected/9a485676-0927-4082-9329-aaacd9aabb0b-kube-api-access-w7zbn\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.828955 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-svc\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.828973 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-config-data\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.828986 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-fernet-keys\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.829026 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-combined-ca-bundle\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.829047 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-config\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.829080 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-credential-keys\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.829099 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-nb\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.829116 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-sb\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.834426 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-fernet-keys\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.835345 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-config-data\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.851720 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-combined-ca-bundle\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.852292 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-credential-keys\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.859307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-scripts\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.887961 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-p6csz"] Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.889011 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.891288 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.895419 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6tv8\" (UniqueName: \"kubernetes.io/projected/258f483b-5c81-4662-a95f-402d907ebfcc-kube-api-access-d6tv8\") pod \"keystone-bootstrap-8qpzn\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.899265 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jhfk9" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.899440 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.930708 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/978a99d3-4e55-4026-a329-5da06bf36c90-etc-machine-id\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.930762 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk2cl\" (UniqueName: \"kubernetes.io/projected/978a99d3-4e55-4026-a329-5da06bf36c90-kube-api-access-jk2cl\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.930795 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-nb\") 
pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.930814 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-db-sync-config-data\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.930829 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-sb\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.930906 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-combined-ca-bundle\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.930930 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-swift-storage-0\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.930948 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7zbn\" (UniqueName: \"kubernetes.io/projected/9a485676-0927-4082-9329-aaacd9aabb0b-kube-api-access-w7zbn\") pod 
\"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.930963 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-svc\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.930991 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-config-data\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.931014 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-scripts\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.931043 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-config\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.932086 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-nb\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " 
pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.932131 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-config\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.932686 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-swift-storage-0\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.932845 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-sb\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.933063 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-svc\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.953703 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-49xz8"] Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.955649 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-49xz8" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.959639 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.959639 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vlnxp" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.959756 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.963135 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-p6csz"] Jan 01 08:46:41 crc kubenswrapper[4867]: I0101 08:46:41.978432 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7zbn\" (UniqueName: \"kubernetes.io/projected/9a485676-0927-4082-9329-aaacd9aabb0b-kube-api-access-w7zbn\") pod \"dnsmasq-dns-9bbd44f9c-fm26p\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:41.999923 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-49xz8"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.014112 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.027801 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.029852 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.030920 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.032524 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-combined-ca-bundle\") pod \"neutron-db-sync-49xz8\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " pod="openstack/neutron-db-sync-49xz8" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.032549 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-combined-ca-bundle\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.041414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-config-data\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.041464 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-scripts\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.041518 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-config\") pod \"neutron-db-sync-49xz8\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " pod="openstack/neutron-db-sync-49xz8" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 
08:46:42.041540 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnjwg\" (UniqueName: \"kubernetes.io/projected/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-kube-api-access-lnjwg\") pod \"neutron-db-sync-49xz8\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " pod="openstack/neutron-db-sync-49xz8" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.041587 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/978a99d3-4e55-4026-a329-5da06bf36c90-etc-machine-id\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.041632 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk2cl\" (UniqueName: \"kubernetes.io/projected/978a99d3-4e55-4026-a329-5da06bf36c90-kube-api-access-jk2cl\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.041674 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-db-sync-config-data\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.047407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/978a99d3-4e55-4026-a329-5da06bf36c90-etc-machine-id\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.049262 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"ceilometer-config-data" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.049452 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.072018 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.079158 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-combined-ca-bundle\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.083676 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-scripts\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.088122 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-db-sync-config-data\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.089407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk2cl\" (UniqueName: \"kubernetes.io/projected/978a99d3-4e55-4026-a329-5da06bf36c90-kube-api-access-jk2cl\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.089450 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-sx242"] Jan 01 08:46:42 crc 
kubenswrapper[4867]: I0101 08:46:42.090425 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-sx242" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.098300 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-flkxl" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.099166 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-config-data\") pod \"cinder-db-sync-p6csz\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.099185 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.160466 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.160589 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-log-httpd\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.160624 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-config\") pod \"neutron-db-sync-49xz8\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " pod="openstack/neutron-db-sync-49xz8" Jan 01 08:46:42 crc 
kubenswrapper[4867]: I0101 08:46:42.160651 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-combined-ca-bundle\") pod \"barbican-db-sync-sx242\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " pod="openstack/barbican-db-sync-sx242" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.160686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnjwg\" (UniqueName: \"kubernetes.io/projected/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-kube-api-access-lnjwg\") pod \"neutron-db-sync-49xz8\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " pod="openstack/neutron-db-sync-49xz8" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.160730 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxdf7\" (UniqueName: \"kubernetes.io/projected/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-kube-api-access-rxdf7\") pod \"barbican-db-sync-sx242\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " pod="openstack/barbican-db-sync-sx242" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.160752 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-run-httpd\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.160803 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8csl\" (UniqueName: \"kubernetes.io/projected/b51add02-d86c-4eb3-924d-1b2ac530e97b-kube-api-access-r8csl\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.160923 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.160946 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-scripts\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.160988 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-db-sync-config-data\") pod \"barbican-db-sync-sx242\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " pod="openstack/barbican-db-sync-sx242" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.161083 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-config-data\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.161137 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-combined-ca-bundle\") pod \"neutron-db-sync-49xz8\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " pod="openstack/neutron-db-sync-49xz8" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.181721 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-combined-ca-bundle\") pod \"neutron-db-sync-49xz8\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " pod="openstack/neutron-db-sync-49xz8" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.188097 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-sx242"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.192960 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-config\") pod \"neutron-db-sync-49xz8\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " pod="openstack/neutron-db-sync-49xz8" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.195464 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnjwg\" (UniqueName: \"kubernetes.io/projected/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-kube-api-access-lnjwg\") pod \"neutron-db-sync-49xz8\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " pod="openstack/neutron-db-sync-49xz8" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.207501 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9bbd44f9c-fm26p"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.226004 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-cdg7k"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.227055 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.228590 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.228842 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-smwhf" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.229131 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.230467 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-cdg7k"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.272314 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-config-data\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.272593 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.272636 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-log-httpd\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.272659 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-combined-ca-bundle\") pod \"barbican-db-sync-sx242\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " pod="openstack/barbican-db-sync-sx242" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.272684 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxdf7\" (UniqueName: \"kubernetes.io/projected/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-kube-api-access-rxdf7\") pod \"barbican-db-sync-sx242\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " pod="openstack/barbican-db-sync-sx242" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.272703 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-run-httpd\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.272730 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8csl\" (UniqueName: \"kubernetes.io/projected/b51add02-d86c-4eb3-924d-1b2ac530e97b-kube-api-access-r8csl\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.272771 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.272793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-scripts\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " 
pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.272816 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-db-sync-config-data\") pod \"barbican-db-sync-sx242\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " pod="openstack/barbican-db-sync-sx242" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.277603 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-run-httpd\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.284252 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-log-httpd\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.284313 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647d8845b5-mtlg7"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.287120 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.292927 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.298801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8csl\" (UniqueName: \"kubernetes.io/projected/b51add02-d86c-4eb3-924d-1b2ac530e97b-kube-api-access-r8csl\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.299932 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-scripts\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.309280 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647d8845b5-mtlg7"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.313565 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-combined-ca-bundle\") pod \"barbican-db-sync-sx242\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " pod="openstack/barbican-db-sync-sx242" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.313832 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-db-sync-config-data\") pod \"barbican-db-sync-sx242\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " pod="openstack/barbican-db-sync-sx242" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.314336 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-config-data\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " 
pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.316100 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.325453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxdf7\" (UniqueName: \"kubernetes.io/projected/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-kube-api-access-rxdf7\") pod \"barbican-db-sync-sx242\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " pod="openstack/barbican-db-sync-sx242" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.326155 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-p6csz" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.349000 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-49xz8" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.375595 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-swift-storage-0\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.375696 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-sb\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.375754 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-config-data\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.375781 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb7c774-4ae0-475c-a44a-138a917beac0-logs\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.375819 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-nb\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: 
\"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.375848 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwvtx\" (UniqueName: \"kubernetes.io/projected/eeb7c774-4ae0-475c-a44a-138a917beac0-kube-api-access-hwvtx\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.375879 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-combined-ca-bundle\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.375926 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfht\" (UniqueName: \"kubernetes.io/projected/eaa076e1-87b1-4118-b601-ba85239d1239-kube-api-access-xhfht\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.375975 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-scripts\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.376011 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-svc\") pod 
\"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.376049 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-config\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.473425 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.477446 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb7c774-4ae0-475c-a44a-138a917beac0-logs\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.477503 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-nb\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.477532 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwvtx\" (UniqueName: \"kubernetes.io/projected/eeb7c774-4ae0-475c-a44a-138a917beac0-kube-api-access-hwvtx\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.477556 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-combined-ca-bundle\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.477570 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhfht\" (UniqueName: \"kubernetes.io/projected/eaa076e1-87b1-4118-b601-ba85239d1239-kube-api-access-xhfht\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.477598 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-scripts\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.477622 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-svc\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.477646 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-config\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.477688 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-swift-storage-0\") 
pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.477714 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-sb\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.477733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-config-data\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.483021 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-swift-storage-0\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.483533 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-sb\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.484499 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-config-data\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " 
pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.484593 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-config\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.484745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb7c774-4ae0-475c-a44a-138a917beac0-logs\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.485440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-nb\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.492011 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-sx242" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.492808 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-combined-ca-bundle\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.497261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-svc\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.500453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-scripts\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.502616 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhfht\" (UniqueName: \"kubernetes.io/projected/eaa076e1-87b1-4118-b601-ba85239d1239-kube-api-access-xhfht\") pod \"dnsmasq-dns-647d8845b5-mtlg7\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.504326 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwvtx\" (UniqueName: \"kubernetes.io/projected/eeb7c774-4ae0-475c-a44a-138a917beac0-kube-api-access-hwvtx\") pod \"placement-db-sync-cdg7k\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 
08:46:42.563264 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-cdg7k" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.622689 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.665434 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8qpzn"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.676746 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9bbd44f9c-fm26p"] Jan 01 08:46:42 crc kubenswrapper[4867]: W0101 08:46:42.685095 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a485676_0927_4082_9329_aaacd9aabb0b.slice/crio-c82837f295a4831d27c8e2ab8af4620f2db495a46dbc5cd36c034dab225fef59 WatchSource:0}: Error finding container c82837f295a4831d27c8e2ab8af4620f2db495a46dbc5cd36c034dab225fef59: Status 404 returned error can't find the container with id c82837f295a4831d27c8e2ab8af4620f2db495a46dbc5cd36c034dab225fef59 Jan 01 08:46:42 crc kubenswrapper[4867]: W0101 08:46:42.685409 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod258f483b_5c81_4662_a95f_402d907ebfcc.slice/crio-626a1f363701eb844a5c23d2d3ab81a7ae0597a1aa702f3e9f06ba93650b955e WatchSource:0}: Error finding container 626a1f363701eb844a5c23d2d3ab81a7ae0597a1aa702f3e9f06ba93650b955e: Status 404 returned error can't find the container with id 626a1f363701eb844a5c23d2d3ab81a7ae0597a1aa702f3e9f06ba93650b955e Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.792200 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.793656 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.803242 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-sg4nb" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.803257 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.803488 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.803615 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.814316 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.852894 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-49xz8"] Jan 01 08:46:42 crc kubenswrapper[4867]: W0101 08:46:42.867948 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e3c0e55_238e_4e4b_b5fb_da86a9948f01.slice/crio-b56b1e4bfc9cb063d1630945033f7857eab14f35aea0f527f4132ee285b7ba10 WatchSource:0}: Error finding container b56b1e4bfc9cb063d1630945033f7857eab14f35aea0f527f4132ee285b7ba10: Status 404 returned error can't find the container with id b56b1e4bfc9cb063d1630945033f7857eab14f35aea0f527f4132ee285b7ba10 Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.875575 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-p6csz"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.886069 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49n78\" (UniqueName: 
\"kubernetes.io/projected/862f6004-f33b-4866-be38-15e5e3227b0e-kube-api-access-49n78\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.886161 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.886190 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.886250 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.886268 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.886303 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.886319 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-logs\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.886350 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: W0101 08:46:42.886451 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod978a99d3_4e55_4026_a329_5da06bf36c90.slice/crio-18026ff58ad1ae737e98df65f047b46427fa933ffc7b8f59634a8e902ffcc426 WatchSource:0}: Error finding container 18026ff58ad1ae737e98df65f047b46427fa933ffc7b8f59634a8e902ffcc426: Status 404 returned error can't find the container with id 18026ff58ad1ae737e98df65f047b46427fa933ffc7b8f59634a8e902ffcc426 Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.931833 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.935107 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.939138 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.939646 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.949345 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987311 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987343 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-logs\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987377 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987404 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-logs\") pod 
\"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987423 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49n78\" (UniqueName: \"kubernetes.io/projected/862f6004-f33b-4866-be38-15e5e3227b0e-kube-api-access-49n78\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987440 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987495 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9qcm\" (UniqueName: \"kubernetes.io/projected/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-kube-api-access-z9qcm\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987518 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987534 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987557 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987575 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987604 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987623 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987659 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.987676 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.988466 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.988508 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.988753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.991925 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.992619 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.996709 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:42 crc kubenswrapper[4867]: I0101 08:46:42.999081 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.004455 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-sx242"] Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.017652 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 
08:46:43.019554 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49n78\" (UniqueName: \"kubernetes.io/projected/862f6004-f33b-4866-be38-15e5e3227b0e-kube-api-access-49n78\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: W0101 08:46:43.021139 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23acd1c1_f4b4_4d70_be4e_ea07cbff8053.slice/crio-8177d642c3d58eb781af0f63b55181622f7956b45ffda266211965604abccb6e WatchSource:0}: Error finding container 8177d642c3d58eb781af0f63b55181622f7956b45ffda266211965604abccb6e: Status 404 returned error can't find the container with id 8177d642c3d58eb781af0f63b55181622f7956b45ffda266211965604abccb6e Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.038866 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.092928 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.092967 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " 
pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.093047 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-logs\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.093072 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.093106 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.093128 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9qcm\" (UniqueName: \"kubernetes.io/projected/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-kube-api-access-z9qcm\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.093153 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc 
kubenswrapper[4867]: I0101 08:46:43.093552 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.093268 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.093628 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.093989 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-logs\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.103478 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.103586 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.104672 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.107980 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.110306 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9qcm\" (UniqueName: \"kubernetes.io/projected/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-kube-api-access-z9qcm\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.124570 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.136582 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.200052 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-cdg7k"] Jan 01 08:46:43 crc kubenswrapper[4867]: W0101 08:46:43.209300 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeb7c774_4ae0_475c_a44a_138a917beac0.slice/crio-c45c42b74c6f608b40077b8e79b0d6279d0548a40f095a1a141e3585c5c6c38e WatchSource:0}: Error finding container c45c42b74c6f608b40077b8e79b0d6279d0548a40f095a1a141e3585c5c6c38e: Status 404 returned error can't find the container with id c45c42b74c6f608b40077b8e79b0d6279d0548a40f095a1a141e3585c5c6c38e Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.210790 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647d8845b5-mtlg7"] Jan 01 08:46:43 crc kubenswrapper[4867]: W0101 08:46:43.213669 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa076e1_87b1_4118_b601_ba85239d1239.slice/crio-7d0a4e0216f0d87b49a3b7e5a6654da96c6bc6892454e3a906121a364a900e08 WatchSource:0}: Error finding container 7d0a4e0216f0d87b49a3b7e5a6654da96c6bc6892454e3a906121a364a900e08: Status 404 returned error can't find the container with id 7d0a4e0216f0d87b49a3b7e5a6654da96c6bc6892454e3a906121a364a900e08 Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.421177 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cdg7k" 
event={"ID":"eeb7c774-4ae0-475c-a44a-138a917beac0","Type":"ContainerStarted","Data":"c45c42b74c6f608b40077b8e79b0d6279d0548a40f095a1a141e3585c5c6c38e"} Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.422072 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.424517 4867 generic.go:334] "Generic (PLEG): container finished" podID="9a485676-0927-4082-9329-aaacd9aabb0b" containerID="2018e812b8aba4ba97dad73733ab000df12c5fb70d75e1cdfe44b25d1ce9407c" exitCode=0 Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.424565 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" event={"ID":"9a485676-0927-4082-9329-aaacd9aabb0b","Type":"ContainerDied","Data":"2018e812b8aba4ba97dad73733ab000df12c5fb70d75e1cdfe44b25d1ce9407c"} Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.424582 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" event={"ID":"9a485676-0927-4082-9329-aaacd9aabb0b","Type":"ContainerStarted","Data":"c82837f295a4831d27c8e2ab8af4620f2db495a46dbc5cd36c034dab225fef59"} Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.426176 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-sx242" event={"ID":"23acd1c1-f4b4-4d70-be4e-ea07cbff8053","Type":"ContainerStarted","Data":"8177d642c3d58eb781af0f63b55181622f7956b45ffda266211965604abccb6e"} Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.452921 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-49xz8" event={"ID":"8e3c0e55-238e-4e4b-b5fb-da86a9948f01","Type":"ContainerStarted","Data":"ad2a6a74c002016bb0dabc7c9ffad35550dacca301a44244c6a61f60a5d320af"} Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.452962 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-49xz8" 
event={"ID":"8e3c0e55-238e-4e4b-b5fb-da86a9948f01","Type":"ContainerStarted","Data":"b56b1e4bfc9cb063d1630945033f7857eab14f35aea0f527f4132ee285b7ba10"} Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.455322 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p6csz" event={"ID":"978a99d3-4e55-4026-a329-5da06bf36c90","Type":"ContainerStarted","Data":"18026ff58ad1ae737e98df65f047b46427fa933ffc7b8f59634a8e902ffcc426"} Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.457278 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" event={"ID":"eaa076e1-87b1-4118-b601-ba85239d1239","Type":"ContainerStarted","Data":"7d0a4e0216f0d87b49a3b7e5a6654da96c6bc6892454e3a906121a364a900e08"} Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.459282 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8qpzn" event={"ID":"258f483b-5c81-4662-a95f-402d907ebfcc","Type":"ContainerStarted","Data":"c075a54d107c469a73bb0798878da75177f69ca6ebfdff598d2c839397f64a6d"} Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.459312 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8qpzn" event={"ID":"258f483b-5c81-4662-a95f-402d907ebfcc","Type":"ContainerStarted","Data":"626a1f363701eb844a5c23d2d3ab81a7ae0597a1aa702f3e9f06ba93650b955e"} Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.472616 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51add02-d86c-4eb3-924d-1b2ac530e97b","Type":"ContainerStarted","Data":"f7ea4af71f99e6d8df6157f41a790842e230c2afeaaaac388b08575584fc52e6"} Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.473562 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-49xz8" podStartSLOduration=2.473542157 podStartE2EDuration="2.473542157s" podCreationTimestamp="2026-01-01 08:46:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:46:43.472872068 +0000 UTC m=+1212.608140847" watchObservedRunningTime="2026-01-01 08:46:43.473542157 +0000 UTC m=+1212.608810926" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.782234 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8qpzn" podStartSLOduration=2.782214088 podStartE2EDuration="2.782214088s" podCreationTimestamp="2026-01-01 08:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:46:43.558409521 +0000 UTC m=+1212.693678300" watchObservedRunningTime="2026-01-01 08:46:43.782214088 +0000 UTC m=+1212.917482857" Jan 01 08:46:43 crc kubenswrapper[4867]: I0101 08:46:43.795333 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.067183 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.118819 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-sb\") pod \"9a485676-0927-4082-9329-aaacd9aabb0b\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.118938 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-swift-storage-0\") pod \"9a485676-0927-4082-9329-aaacd9aabb0b\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.119081 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7zbn\" (UniqueName: \"kubernetes.io/projected/9a485676-0927-4082-9329-aaacd9aabb0b-kube-api-access-w7zbn\") pod \"9a485676-0927-4082-9329-aaacd9aabb0b\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.119114 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-svc\") pod \"9a485676-0927-4082-9329-aaacd9aabb0b\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.119190 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-nb\") pod \"9a485676-0927-4082-9329-aaacd9aabb0b\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.119252 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-config\") pod \"9a485676-0927-4082-9329-aaacd9aabb0b\" (UID: \"9a485676-0927-4082-9329-aaacd9aabb0b\") " Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.140911 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a485676-0927-4082-9329-aaacd9aabb0b-kube-api-access-w7zbn" (OuterVolumeSpecName: "kube-api-access-w7zbn") pod "9a485676-0927-4082-9329-aaacd9aabb0b" (UID: "9a485676-0927-4082-9329-aaacd9aabb0b"). InnerVolumeSpecName "kube-api-access-w7zbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.143217 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a485676-0927-4082-9329-aaacd9aabb0b" (UID: "9a485676-0927-4082-9329-aaacd9aabb0b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.148401 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-config" (OuterVolumeSpecName: "config") pod "9a485676-0927-4082-9329-aaacd9aabb0b" (UID: "9a485676-0927-4082-9329-aaacd9aabb0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.153724 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9a485676-0927-4082-9329-aaacd9aabb0b" (UID: "9a485676-0927-4082-9329-aaacd9aabb0b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.163007 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9a485676-0927-4082-9329-aaacd9aabb0b" (UID: "9a485676-0927-4082-9329-aaacd9aabb0b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.177031 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9a485676-0927-4082-9329-aaacd9aabb0b" (UID: "9a485676-0927-4082-9329-aaacd9aabb0b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.220907 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.220932 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.220944 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.220954 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7zbn\" (UniqueName: 
\"kubernetes.io/projected/9a485676-0927-4082-9329-aaacd9aabb0b-kube-api-access-w7zbn\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.220963 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.220972 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a485676-0927-4082-9329-aaacd9aabb0b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.274003 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:46:44 crc kubenswrapper[4867]: W0101 08:46:44.289592 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae35dc22_012d_4c8e_8f97_5c5ee3596d96.slice/crio-977ae911b4ca6b78aa207fc27eb93397aa307f60652b9c0adf8bf5922ac67068 WatchSource:0}: Error finding container 977ae911b4ca6b78aa207fc27eb93397aa307f60652b9c0adf8bf5922ac67068: Status 404 returned error can't find the container with id 977ae911b4ca6b78aa207fc27eb93397aa307f60652b9c0adf8bf5922ac67068 Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.490635 4867 generic.go:334] "Generic (PLEG): container finished" podID="eaa076e1-87b1-4118-b601-ba85239d1239" containerID="dbf530f0282d8b762efc12f6f66f07fd30d84690c654b9848c9a7932080aee66" exitCode=0 Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.490809 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" event={"ID":"eaa076e1-87b1-4118-b601-ba85239d1239","Type":"ContainerDied","Data":"dbf530f0282d8b762efc12f6f66f07fd30d84690c654b9848c9a7932080aee66"} Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.522197 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"862f6004-f33b-4866-be38-15e5e3227b0e","Type":"ContainerStarted","Data":"5516bbecfb8cf5922116809ce5df1c91b50e9467bdfade9b41c601986a4fe235"} Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.544643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" event={"ID":"9a485676-0927-4082-9329-aaacd9aabb0b","Type":"ContainerDied","Data":"c82837f295a4831d27c8e2ab8af4620f2db495a46dbc5cd36c034dab225fef59"} Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.544703 4867 scope.go:117] "RemoveContainer" containerID="2018e812b8aba4ba97dad73733ab000df12c5fb70d75e1cdfe44b25d1ce9407c" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.544860 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9bbd44f9c-fm26p" Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.550942 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae35dc22-012d-4c8e-8f97-5c5ee3596d96","Type":"ContainerStarted","Data":"977ae911b4ca6b78aa207fc27eb93397aa307f60652b9c0adf8bf5922ac67068"} Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.653431 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9bbd44f9c-fm26p"] Jan 01 08:46:44 crc kubenswrapper[4867]: I0101 08:46:44.679877 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9bbd44f9c-fm26p"] Jan 01 08:46:45 crc kubenswrapper[4867]: I0101 08:46:45.067869 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:46:45 crc kubenswrapper[4867]: I0101 08:46:45.118516 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:46:45 crc kubenswrapper[4867]: I0101 08:46:45.154730 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="9a485676-0927-4082-9329-aaacd9aabb0b" path="/var/lib/kubelet/pods/9a485676-0927-4082-9329-aaacd9aabb0b/volumes" Jan 01 08:46:45 crc kubenswrapper[4867]: I0101 08:46:45.188125 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:46:45 crc kubenswrapper[4867]: I0101 08:46:45.569030 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"862f6004-f33b-4866-be38-15e5e3227b0e","Type":"ContainerStarted","Data":"469ab3b044082bed1128855ca4c5b95270d7a3a654f826dcc1be8ff880e3ff8d"} Jan 01 08:46:46 crc kubenswrapper[4867]: I0101 08:46:46.585285 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae35dc22-012d-4c8e-8f97-5c5ee3596d96","Type":"ContainerStarted","Data":"4aac487634100db53b9df3d997c1eadf5155484f48b0f8d42bf7e03073bf5f1c"} Jan 01 08:46:46 crc kubenswrapper[4867]: I0101 08:46:46.585821 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae35dc22-012d-4c8e-8f97-5c5ee3596d96","Type":"ContainerStarted","Data":"4e7dd44b068f40d1255226fdd522a880ca586de2e24e313ca8aa9cec93f2de45"} Jan 01 08:46:46 crc kubenswrapper[4867]: I0101 08:46:46.585606 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ae35dc22-012d-4c8e-8f97-5c5ee3596d96" containerName="glance-log" containerID="cri-o://4e7dd44b068f40d1255226fdd522a880ca586de2e24e313ca8aa9cec93f2de45" gracePeriod=30 Jan 01 08:46:46 crc kubenswrapper[4867]: I0101 08:46:46.585981 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ae35dc22-012d-4c8e-8f97-5c5ee3596d96" containerName="glance-httpd" containerID="cri-o://4aac487634100db53b9df3d997c1eadf5155484f48b0f8d42bf7e03073bf5f1c" gracePeriod=30 Jan 01 08:46:46 crc 
kubenswrapper[4867]: I0101 08:46:46.588856 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" event={"ID":"eaa076e1-87b1-4118-b601-ba85239d1239","Type":"ContainerStarted","Data":"0b06e88cc4d2f8b2777aa114152eca62a31a01c35345a95786744159b492aacf"} Jan 01 08:46:46 crc kubenswrapper[4867]: I0101 08:46:46.589715 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:46 crc kubenswrapper[4867]: I0101 08:46:46.619342 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"862f6004-f33b-4866-be38-15e5e3227b0e","Type":"ContainerStarted","Data":"06a8468ca6be7f521713b92451687c271153a8d83f2396ac9b176ecca2471334"} Jan 01 08:46:46 crc kubenswrapper[4867]: I0101 08:46:46.619986 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="862f6004-f33b-4866-be38-15e5e3227b0e" containerName="glance-httpd" containerID="cri-o://06a8468ca6be7f521713b92451687c271153a8d83f2396ac9b176ecca2471334" gracePeriod=30 Jan 01 08:46:46 crc kubenswrapper[4867]: I0101 08:46:46.622648 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="862f6004-f33b-4866-be38-15e5e3227b0e" containerName="glance-log" containerID="cri-o://469ab3b044082bed1128855ca4c5b95270d7a3a654f826dcc1be8ff880e3ff8d" gracePeriod=30 Jan 01 08:46:46 crc kubenswrapper[4867]: I0101 08:46:46.627645 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.627622512 podStartE2EDuration="5.627622512s" podCreationTimestamp="2026-01-01 08:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:46:46.603863414 +0000 UTC m=+1215.739132203" 
watchObservedRunningTime="2026-01-01 08:46:46.627622512 +0000 UTC m=+1215.762891281" Jan 01 08:46:46 crc kubenswrapper[4867]: I0101 08:46:46.634608 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" podStartSLOduration=4.634589157 podStartE2EDuration="4.634589157s" podCreationTimestamp="2026-01-01 08:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:46:46.630989076 +0000 UTC m=+1215.766257865" watchObservedRunningTime="2026-01-01 08:46:46.634589157 +0000 UTC m=+1215.769857926" Jan 01 08:46:46 crc kubenswrapper[4867]: I0101 08:46:46.654439 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.654421445 podStartE2EDuration="5.654421445s" podCreationTimestamp="2026-01-01 08:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:46:46.651456121 +0000 UTC m=+1215.786724910" watchObservedRunningTime="2026-01-01 08:46:46.654421445 +0000 UTC m=+1215.789690214" Jan 01 08:46:47 crc kubenswrapper[4867]: I0101 08:46:47.631787 4867 generic.go:334] "Generic (PLEG): container finished" podID="862f6004-f33b-4866-be38-15e5e3227b0e" containerID="06a8468ca6be7f521713b92451687c271153a8d83f2396ac9b176ecca2471334" exitCode=143 Jan 01 08:46:47 crc kubenswrapper[4867]: I0101 08:46:47.631836 4867 generic.go:334] "Generic (PLEG): container finished" podID="862f6004-f33b-4866-be38-15e5e3227b0e" containerID="469ab3b044082bed1128855ca4c5b95270d7a3a654f826dcc1be8ff880e3ff8d" exitCode=143 Jan 01 08:46:47 crc kubenswrapper[4867]: I0101 08:46:47.631867 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"862f6004-f33b-4866-be38-15e5e3227b0e","Type":"ContainerDied","Data":"06a8468ca6be7f521713b92451687c271153a8d83f2396ac9b176ecca2471334"} Jan 01 08:46:47 crc kubenswrapper[4867]: I0101 08:46:47.631948 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"862f6004-f33b-4866-be38-15e5e3227b0e","Type":"ContainerDied","Data":"469ab3b044082bed1128855ca4c5b95270d7a3a654f826dcc1be8ff880e3ff8d"} Jan 01 08:46:47 crc kubenswrapper[4867]: I0101 08:46:47.634945 4867 generic.go:334] "Generic (PLEG): container finished" podID="ae35dc22-012d-4c8e-8f97-5c5ee3596d96" containerID="4aac487634100db53b9df3d997c1eadf5155484f48b0f8d42bf7e03073bf5f1c" exitCode=143 Jan 01 08:46:47 crc kubenswrapper[4867]: I0101 08:46:47.634983 4867 generic.go:334] "Generic (PLEG): container finished" podID="ae35dc22-012d-4c8e-8f97-5c5ee3596d96" containerID="4e7dd44b068f40d1255226fdd522a880ca586de2e24e313ca8aa9cec93f2de45" exitCode=143 Jan 01 08:46:47 crc kubenswrapper[4867]: I0101 08:46:47.635041 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae35dc22-012d-4c8e-8f97-5c5ee3596d96","Type":"ContainerDied","Data":"4aac487634100db53b9df3d997c1eadf5155484f48b0f8d42bf7e03073bf5f1c"} Jan 01 08:46:47 crc kubenswrapper[4867]: I0101 08:46:47.635093 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae35dc22-012d-4c8e-8f97-5c5ee3596d96","Type":"ContainerDied","Data":"4e7dd44b068f40d1255226fdd522a880ca586de2e24e313ca8aa9cec93f2de45"} Jan 01 08:46:48 crc kubenswrapper[4867]: I0101 08:46:48.647347 4867 generic.go:334] "Generic (PLEG): container finished" podID="258f483b-5c81-4662-a95f-402d907ebfcc" containerID="c075a54d107c469a73bb0798878da75177f69ca6ebfdff598d2c839397f64a6d" exitCode=0 Jan 01 08:46:48 crc kubenswrapper[4867]: I0101 08:46:48.647419 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-8qpzn" event={"ID":"258f483b-5c81-4662-a95f-402d907ebfcc","Type":"ContainerDied","Data":"c075a54d107c469a73bb0798878da75177f69ca6ebfdff598d2c839397f64a6d"} Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.296339 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.395006 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-credential-keys\") pod \"258f483b-5c81-4662-a95f-402d907ebfcc\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.395148 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-fernet-keys\") pod \"258f483b-5c81-4662-a95f-402d907ebfcc\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.395203 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-config-data\") pod \"258f483b-5c81-4662-a95f-402d907ebfcc\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.395236 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-scripts\") pod \"258f483b-5c81-4662-a95f-402d907ebfcc\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.395300 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6tv8\" (UniqueName: 
\"kubernetes.io/projected/258f483b-5c81-4662-a95f-402d907ebfcc-kube-api-access-d6tv8\") pod \"258f483b-5c81-4662-a95f-402d907ebfcc\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.395979 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-combined-ca-bundle\") pod \"258f483b-5c81-4662-a95f-402d907ebfcc\" (UID: \"258f483b-5c81-4662-a95f-402d907ebfcc\") " Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.401661 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-scripts" (OuterVolumeSpecName: "scripts") pod "258f483b-5c81-4662-a95f-402d907ebfcc" (UID: "258f483b-5c81-4662-a95f-402d907ebfcc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.407606 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "258f483b-5c81-4662-a95f-402d907ebfcc" (UID: "258f483b-5c81-4662-a95f-402d907ebfcc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.415608 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258f483b-5c81-4662-a95f-402d907ebfcc-kube-api-access-d6tv8" (OuterVolumeSpecName: "kube-api-access-d6tv8") pod "258f483b-5c81-4662-a95f-402d907ebfcc" (UID: "258f483b-5c81-4662-a95f-402d907ebfcc"). InnerVolumeSpecName "kube-api-access-d6tv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.415875 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "258f483b-5c81-4662-a95f-402d907ebfcc" (UID: "258f483b-5c81-4662-a95f-402d907ebfcc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.442077 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "258f483b-5c81-4662-a95f-402d907ebfcc" (UID: "258f483b-5c81-4662-a95f-402d907ebfcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.442066 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-config-data" (OuterVolumeSpecName: "config-data") pod "258f483b-5c81-4662-a95f-402d907ebfcc" (UID: "258f483b-5c81-4662-a95f-402d907ebfcc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.497760 4867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.497794 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.497806 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.497814 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.497825 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6tv8\" (UniqueName: \"kubernetes.io/projected/258f483b-5c81-4662-a95f-402d907ebfcc-kube-api-access-d6tv8\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.497834 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258f483b-5c81-4662-a95f-402d907ebfcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.685055 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8qpzn" event={"ID":"258f483b-5c81-4662-a95f-402d907ebfcc","Type":"ContainerDied","Data":"626a1f363701eb844a5c23d2d3ab81a7ae0597a1aa702f3e9f06ba93650b955e"} Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 
08:46:51.685104 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="626a1f363701eb844a5c23d2d3ab81a7ae0597a1aa702f3e9f06ba93650b955e" Jan 01 08:46:51 crc kubenswrapper[4867]: I0101 08:46:51.685171 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8qpzn" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.387546 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8qpzn"] Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.394081 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8qpzn"] Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.477115 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7fvfm"] Jan 01 08:46:52 crc kubenswrapper[4867]: E0101 08:46:52.477691 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a485676-0927-4082-9329-aaacd9aabb0b" containerName="init" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.477720 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a485676-0927-4082-9329-aaacd9aabb0b" containerName="init" Jan 01 08:46:52 crc kubenswrapper[4867]: E0101 08:46:52.477750 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258f483b-5c81-4662-a95f-402d907ebfcc" containerName="keystone-bootstrap" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.477764 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="258f483b-5c81-4662-a95f-402d907ebfcc" containerName="keystone-bootstrap" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.478097 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a485676-0927-4082-9329-aaacd9aabb0b" containerName="init" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.478139 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="258f483b-5c81-4662-a95f-402d907ebfcc" 
containerName="keystone-bootstrap" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.478989 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.485630 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.485662 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.485702 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-67p9k" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.485934 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.486215 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.490863 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7fvfm"] Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.616580 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-fernet-keys\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.616648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-credential-keys\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc 
kubenswrapper[4867]: I0101 08:46:52.616684 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-scripts\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.616702 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-combined-ca-bundle\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.616759 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm7zj\" (UniqueName: \"kubernetes.io/projected/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-kube-api-access-jm7zj\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.616784 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-config-data\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.625103 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.691263 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-786cc75955-lpfwv"] Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.691542 4867 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" podUID="49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" containerName="dnsmasq-dns" containerID="cri-o://3d48a7cc1f09c73f7654e35066bc60bfc3818573839a37b4f6363c4e57c8dfb4" gracePeriod=10 Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.717937 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-fernet-keys\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.718023 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-credential-keys\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.718065 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-scripts\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.718085 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-combined-ca-bundle\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.718164 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm7zj\" (UniqueName: 
\"kubernetes.io/projected/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-kube-api-access-jm7zj\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.718182 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-config-data\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.725180 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-fernet-keys\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.725828 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-credential-keys\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.726103 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-combined-ca-bundle\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.739813 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm7zj\" (UniqueName: \"kubernetes.io/projected/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-kube-api-access-jm7zj\") pod \"keystone-bootstrap-7fvfm\" (UID: 
\"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.743080 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-scripts\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.743479 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-config-data\") pod \"keystone-bootstrap-7fvfm\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:52 crc kubenswrapper[4867]: I0101 08:46:52.800559 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:46:53 crc kubenswrapper[4867]: I0101 08:46:53.140585 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="258f483b-5c81-4662-a95f-402d907ebfcc" path="/var/lib/kubelet/pods/258f483b-5c81-4662-a95f-402d907ebfcc/volumes" Jan 01 08:46:53 crc kubenswrapper[4867]: I0101 08:46:53.718458 4867 generic.go:334] "Generic (PLEG): container finished" podID="49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" containerID="3d48a7cc1f09c73f7654e35066bc60bfc3818573839a37b4f6363c4e57c8dfb4" exitCode=0 Jan 01 08:46:53 crc kubenswrapper[4867]: I0101 08:46:53.718500 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" event={"ID":"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d","Type":"ContainerDied","Data":"3d48a7cc1f09c73f7654e35066bc60bfc3818573839a37b4f6363c4e57c8dfb4"} Jan 01 08:46:55 crc kubenswrapper[4867]: I0101 08:46:55.864466 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" 
podUID="49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Jan 01 08:47:00 crc kubenswrapper[4867]: I0101 08:47:00.863802 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" podUID="49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Jan 01 08:47:02 crc kubenswrapper[4867]: E0101 08:47:02.736384 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 01 08:47:02 crc kubenswrapper[4867]: E0101 08:47:02.737151 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk2cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-p6csz_openstack(978a99d3-4e55-4026-a329-5da06bf36c90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 01 08:47:02 crc kubenswrapper[4867]: E0101 08:47:02.738639 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-p6csz" podUID="978a99d3-4e55-4026-a329-5da06bf36c90" Jan 01 08:47:02 crc kubenswrapper[4867]: I0101 08:47:02.811720 4867 generic.go:334] "Generic (PLEG): container finished" podID="8e3c0e55-238e-4e4b-b5fb-da86a9948f01" containerID="ad2a6a74c002016bb0dabc7c9ffad35550dacca301a44244c6a61f60a5d320af" exitCode=0 Jan 01 08:47:02 crc kubenswrapper[4867]: I0101 08:47:02.811841 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-49xz8" event={"ID":"8e3c0e55-238e-4e4b-b5fb-da86a9948f01","Type":"ContainerDied","Data":"ad2a6a74c002016bb0dabc7c9ffad35550dacca301a44244c6a61f60a5d320af"} Jan 01 08:47:02 crc kubenswrapper[4867]: E0101 08:47:02.814128 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-p6csz" podUID="978a99d3-4e55-4026-a329-5da06bf36c90" Jan 01 08:47:03 crc kubenswrapper[4867]: E0101 08:47:03.040793 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777" Jan 01 08:47:03 crc kubenswrapper[4867]: E0101 08:47:03.041062 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h677h5cfh649h58bhbh96h5bch57ch657h99h684h9fh64ch55dhdh597h584h96h586h5bfh87hc5h79h5b6h8hc6h5c7hb9h56ch578h55fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8csl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b51add02-d86c-4eb3-924d-1b2ac530e97b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.258337 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.304860 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.358070 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49n78\" (UniqueName: \"kubernetes.io/projected/862f6004-f33b-4866-be38-15e5e3227b0e-kube-api-access-49n78\") pod \"862f6004-f33b-4866-be38-15e5e3227b0e\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.358125 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-config-data\") pod \"862f6004-f33b-4866-be38-15e5e3227b0e\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.358293 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-scripts\") pod \"862f6004-f33b-4866-be38-15e5e3227b0e\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.358321 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"862f6004-f33b-4866-be38-15e5e3227b0e\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.358380 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-internal-tls-certs\") pod \"862f6004-f33b-4866-be38-15e5e3227b0e\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.358485 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-logs\") pod \"862f6004-f33b-4866-be38-15e5e3227b0e\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.358561 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-combined-ca-bundle\") pod \"862f6004-f33b-4866-be38-15e5e3227b0e\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.358598 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-httpd-run\") pod \"862f6004-f33b-4866-be38-15e5e3227b0e\" (UID: \"862f6004-f33b-4866-be38-15e5e3227b0e\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.359270 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "862f6004-f33b-4866-be38-15e5e3227b0e" (UID: "862f6004-f33b-4866-be38-15e5e3227b0e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.359410 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-logs" (OuterVolumeSpecName: "logs") pod "862f6004-f33b-4866-be38-15e5e3227b0e" (UID: "862f6004-f33b-4866-be38-15e5e3227b0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.363181 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.363544 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862f6004-f33b-4866-be38-15e5e3227b0e-kube-api-access-49n78" (OuterVolumeSpecName: "kube-api-access-49n78") pod "862f6004-f33b-4866-be38-15e5e3227b0e" (UID: "862f6004-f33b-4866-be38-15e5e3227b0e"). InnerVolumeSpecName "kube-api-access-49n78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.382231 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "862f6004-f33b-4866-be38-15e5e3227b0e" (UID: "862f6004-f33b-4866-be38-15e5e3227b0e"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.382846 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-scripts" (OuterVolumeSpecName: "scripts") pod "862f6004-f33b-4866-be38-15e5e3227b0e" (UID: "862f6004-f33b-4866-be38-15e5e3227b0e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.383956 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "862f6004-f33b-4866-be38-15e5e3227b0e" (UID: "862f6004-f33b-4866-be38-15e5e3227b0e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.412950 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-config-data" (OuterVolumeSpecName: "config-data") pod "862f6004-f33b-4866-be38-15e5e3227b0e" (UID: "862f6004-f33b-4866-be38-15e5e3227b0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.458128 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "862f6004-f33b-4866-be38-15e5e3227b0e" (UID: "862f6004-f33b-4866-be38-15e5e3227b0e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.460941 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-logs\") pod \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461101 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-combined-ca-bundle\") pod \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461140 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-svc\") pod \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " Jan 01 08:47:03 crc 
kubenswrapper[4867]: I0101 08:47:03.461265 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-nb\") pod \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461305 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-sb\") pod \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461376 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-httpd-run\") pod \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461428 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-config\") pod \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461469 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-scripts\") pod \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461536 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-public-tls-certs\") pod 
\"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461557 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2gh7\" (UniqueName: \"kubernetes.io/projected/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-kube-api-access-f2gh7\") pod \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461610 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-logs" (OuterVolumeSpecName: "logs") pod "ae35dc22-012d-4c8e-8f97-5c5ee3596d96" (UID: "ae35dc22-012d-4c8e-8f97-5c5ee3596d96"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461637 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-swift-storage-0\") pod \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\" (UID: \"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461732 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-config-data\") pod \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.461776 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9qcm\" (UniqueName: \"kubernetes.io/projected/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-kube-api-access-z9qcm\") pod \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\" (UID: \"ae35dc22-012d-4c8e-8f97-5c5ee3596d96\") " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.463801 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49n78\" (UniqueName: \"kubernetes.io/projected/862f6004-f33b-4866-be38-15e5e3227b0e-kube-api-access-49n78\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.463844 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.463860 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.463915 4867 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.463932 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.463943 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.463954 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.463964 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862f6004-f33b-4866-be38-15e5e3227b0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.463975 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/862f6004-f33b-4866-be38-15e5e3227b0e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.464128 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ae35dc22-012d-4c8e-8f97-5c5ee3596d96" (UID: "ae35dc22-012d-4c8e-8f97-5c5ee3596d96"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.466474 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7fvfm"] Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.468013 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-scripts" (OuterVolumeSpecName: "scripts") pod "ae35dc22-012d-4c8e-8f97-5c5ee3596d96" (UID: "ae35dc22-012d-4c8e-8f97-5c5ee3596d96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.468032 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-kube-api-access-z9qcm" (OuterVolumeSpecName: "kube-api-access-z9qcm") pod "ae35dc22-012d-4c8e-8f97-5c5ee3596d96" (UID: "ae35dc22-012d-4c8e-8f97-5c5ee3596d96"). InnerVolumeSpecName "kube-api-access-z9qcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.468012 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "ae35dc22-012d-4c8e-8f97-5c5ee3596d96" (UID: "ae35dc22-012d-4c8e-8f97-5c5ee3596d96"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.468229 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-kube-api-access-f2gh7" (OuterVolumeSpecName: "kube-api-access-f2gh7") pod "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" (UID: "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d"). InnerVolumeSpecName "kube-api-access-f2gh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.488020 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.493970 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae35dc22-012d-4c8e-8f97-5c5ee3596d96" (UID: "ae35dc22-012d-4c8e-8f97-5c5ee3596d96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.506148 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" (UID: "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.510402 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" (UID: "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.516764 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" (UID: "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.517912 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-config" (OuterVolumeSpecName: "config") pod "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" (UID: "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.518938 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" (UID: "49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.519580 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-config-data" (OuterVolumeSpecName: "config-data") pod "ae35dc22-012d-4c8e-8f97-5c5ee3596d96" (UID: "ae35dc22-012d-4c8e-8f97-5c5ee3596d96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.521252 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ae35dc22-012d-4c8e-8f97-5c5ee3596d96" (UID: "ae35dc22-012d-4c8e-8f97-5c5ee3596d96"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565094 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565137 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565149 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565159 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565171 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565181 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565193 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565203 4867 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565214 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565253 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565266 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2gh7\" (UniqueName: \"kubernetes.io/projected/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-kube-api-access-f2gh7\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565279 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565292 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.565304 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9qcm\" (UniqueName: \"kubernetes.io/projected/ae35dc22-012d-4c8e-8f97-5c5ee3596d96-kube-api-access-z9qcm\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.583165 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 01 08:47:03 
crc kubenswrapper[4867]: I0101 08:47:03.667123 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.823292 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"862f6004-f33b-4866-be38-15e5e3227b0e","Type":"ContainerDied","Data":"5516bbecfb8cf5922116809ce5df1c91b50e9467bdfade9b41c601986a4fe235"} Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.823337 4867 scope.go:117] "RemoveContainer" containerID="06a8468ca6be7f521713b92451687c271153a8d83f2396ac9b176ecca2471334" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.823434 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.840420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae35dc22-012d-4c8e-8f97-5c5ee3596d96","Type":"ContainerDied","Data":"977ae911b4ca6b78aa207fc27eb93397aa307f60652b9c0adf8bf5922ac67068"} Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.840462 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.843522 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7fvfm" event={"ID":"83a36dad-781c-47b3-a1f2-d8aa5d7182fb","Type":"ContainerStarted","Data":"5e417c532e469853d56e32420e309e312face5cadadd0c1e91903b890365eb58"} Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.843586 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7fvfm" event={"ID":"83a36dad-781c-47b3-a1f2-d8aa5d7182fb","Type":"ContainerStarted","Data":"b3b2addb6a313eb3b435309d82dbf094dc3184a6c72db275c076f2586dc138c5"} Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.846949 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-sx242" event={"ID":"23acd1c1-f4b4-4d70-be4e-ea07cbff8053","Type":"ContainerStarted","Data":"7517e3a771393dd813ce33a6e896560947fef7f4d72c0dac4bc8064fd05eda37"} Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.861961 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.863818 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" event={"ID":"49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d","Type":"ContainerDied","Data":"c4f1f714b0f0850cc6f3d54752698bc5c0032b855b8a65b1bfb9721d5b852c34"} Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.863959 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-786cc75955-lpfwv" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.865994 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cdg7k" event={"ID":"eeb7c774-4ae0-475c-a44a-138a917beac0","Type":"ContainerStarted","Data":"dee65a7cb9358ada33024723a1727a8030b5e5df85252b95d15f51b33fabff61"} Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.886987 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.902754 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-sx242" podStartSLOduration=2.873604476 podStartE2EDuration="22.902730649s" podCreationTimestamp="2026-01-01 08:46:41 +0000 UTC" firstStartedPulling="2026-01-01 08:46:43.023599707 +0000 UTC m=+1212.158868476" lastFinishedPulling="2026-01-01 08:47:03.05272584 +0000 UTC m=+1232.187994649" observedRunningTime="2026-01-01 08:47:03.880763762 +0000 UTC m=+1233.016032551" watchObservedRunningTime="2026-01-01 08:47:03.902730649 +0000 UTC m=+1233.037999418" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.928439 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:47:03 crc kubenswrapper[4867]: E0101 08:47:03.943719 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862f6004-f33b-4866-be38-15e5e3227b0e" containerName="glance-log" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.943739 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="862f6004-f33b-4866-be38-15e5e3227b0e" containerName="glance-log" Jan 01 08:47:03 crc kubenswrapper[4867]: E0101 08:47:03.943758 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae35dc22-012d-4c8e-8f97-5c5ee3596d96" containerName="glance-log" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.943764 4867 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ae35dc22-012d-4c8e-8f97-5c5ee3596d96" containerName="glance-log" Jan 01 08:47:03 crc kubenswrapper[4867]: E0101 08:47:03.943789 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae35dc22-012d-4c8e-8f97-5c5ee3596d96" containerName="glance-httpd" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.943797 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae35dc22-012d-4c8e-8f97-5c5ee3596d96" containerName="glance-httpd" Jan 01 08:47:03 crc kubenswrapper[4867]: E0101 08:47:03.946263 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" containerName="dnsmasq-dns" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.946281 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" containerName="dnsmasq-dns" Jan 01 08:47:03 crc kubenswrapper[4867]: E0101 08:47:03.946300 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" containerName="init" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.946306 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" containerName="init" Jan 01 08:47:03 crc kubenswrapper[4867]: E0101 08:47:03.946340 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862f6004-f33b-4866-be38-15e5e3227b0e" containerName="glance-httpd" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.946348 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="862f6004-f33b-4866-be38-15e5e3227b0e" containerName="glance-httpd" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.947324 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="862f6004-f33b-4866-be38-15e5e3227b0e" containerName="glance-log" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.947856 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" containerName="dnsmasq-dns" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.947906 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae35dc22-012d-4c8e-8f97-5c5ee3596d96" containerName="glance-httpd" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.947938 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="862f6004-f33b-4866-be38-15e5e3227b0e" containerName="glance-httpd" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.947957 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae35dc22-012d-4c8e-8f97-5c5ee3596d96" containerName="glance-log" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.949876 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.954780 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.954997 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.955192 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-sg4nb" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.955224 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.956280 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.964736 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7fvfm" podStartSLOduration=11.964686189 podStartE2EDuration="11.964686189s" podCreationTimestamp="2026-01-01 08:46:52 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:03.912233476 +0000 UTC m=+1233.047502235" watchObservedRunningTime="2026-01-01 08:47:03.964686189 +0000 UTC m=+1233.099954958" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.971442 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-cdg7k" podStartSLOduration=2.178214293 podStartE2EDuration="21.971426269s" podCreationTimestamp="2026-01-01 08:46:42 +0000 UTC" firstStartedPulling="2026-01-01 08:46:43.241994552 +0000 UTC m=+1212.377263321" lastFinishedPulling="2026-01-01 08:47:03.035206488 +0000 UTC m=+1232.170475297" observedRunningTime="2026-01-01 08:47:03.933830152 +0000 UTC m=+1233.069098951" watchObservedRunningTime="2026-01-01 08:47:03.971426269 +0000 UTC m=+1233.106695038" Jan 01 08:47:03 crc kubenswrapper[4867]: I0101 08:47:03.992733 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-786cc75955-lpfwv"] Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.006921 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-786cc75955-lpfwv"] Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.018174 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.032451 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.051119 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.052412 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.056626 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.056816 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.062558 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.073667 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.073740 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp7ng\" (UniqueName: \"kubernetes.io/projected/1032e145-2486-4fe1-9bde-b067d64c5d1c-kube-api-access-qp7ng\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.073771 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.073822 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.073858 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.073896 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.073979 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.074001 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175213 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175251 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175293 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-logs\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175308 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175401 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7nds\" (UniqueName: 
\"kubernetes.io/projected/fa829cab-564c-410b-a84f-50bc6bba8676-kube-api-access-n7nds\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175443 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175461 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175476 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175541 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175559 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175602 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp7ng\" (UniqueName: \"kubernetes.io/projected/1032e145-2486-4fe1-9bde-b067d64c5d1c-kube-api-access-qp7ng\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175616 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175632 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.175705 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.177104 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.178602 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.178628 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.184372 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.194560 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.194770 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.204707 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.209524 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.215535 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp7ng\" (UniqueName: \"kubernetes.io/projected/1032e145-2486-4fe1-9bde-b067d64c5d1c-kube-api-access-qp7ng\") pod \"glance-default-internal-api-0\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.278448 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7nds\" (UniqueName: \"kubernetes.io/projected/fa829cab-564c-410b-a84f-50bc6bba8676-kube-api-access-n7nds\") pod \"glance-default-external-api-0\" (UID: 
\"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.278508 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.278550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.278584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.278620 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.278648 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-logs\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" 
Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.278662 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.278683 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.278851 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.279504 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.279731 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-logs\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.281427 4867 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.285361 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.285490 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.286113 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.288271 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.294807 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7nds\" (UniqueName: \"kubernetes.io/projected/fa829cab-564c-410b-a84f-50bc6bba8676-kube-api-access-n7nds\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 
01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.308101 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.348126 4867 scope.go:117] "RemoveContainer" containerID="469ab3b044082bed1128855ca4c5b95270d7a3a654f826dcc1be8ff880e3ff8d" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.369940 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-49xz8" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.425450 4867 scope.go:117] "RemoveContainer" containerID="4aac487634100db53b9df3d997c1eadf5155484f48b0f8d42bf7e03073bf5f1c" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.448029 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.471149 4867 scope.go:117] "RemoveContainer" containerID="4e7dd44b068f40d1255226fdd522a880ca586de2e24e313ca8aa9cec93f2de45" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.483147 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnjwg\" (UniqueName: \"kubernetes.io/projected/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-kube-api-access-lnjwg\") pod \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.483202 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-combined-ca-bundle\") pod \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.483258 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-config\") pod \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\" (UID: \"8e3c0e55-238e-4e4b-b5fb-da86a9948f01\") " Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.488290 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-kube-api-access-lnjwg" (OuterVolumeSpecName: "kube-api-access-lnjwg") pod "8e3c0e55-238e-4e4b-b5fb-da86a9948f01" (UID: "8e3c0e55-238e-4e4b-b5fb-da86a9948f01"). InnerVolumeSpecName "kube-api-access-lnjwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.495709 4867 scope.go:117] "RemoveContainer" containerID="3d48a7cc1f09c73f7654e35066bc60bfc3818573839a37b4f6363c4e57c8dfb4" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.512334 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e3c0e55-238e-4e4b-b5fb-da86a9948f01" (UID: "8e3c0e55-238e-4e4b-b5fb-da86a9948f01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.529178 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-config" (OuterVolumeSpecName: "config") pod "8e3c0e55-238e-4e4b-b5fb-da86a9948f01" (UID: "8e3c0e55-238e-4e4b-b5fb-da86a9948f01"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.548618 4867 scope.go:117] "RemoveContainer" containerID="93180d09bcd9a1c91babba1163bcf4b7e516da99bb04040d573f2226d479ec8d" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.585144 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.585166 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnjwg\" (UniqueName: \"kubernetes.io/projected/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-kube-api-access-lnjwg\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.585177 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3c0e55-238e-4e4b-b5fb-da86a9948f01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.880134 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-49xz8" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.880149 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-49xz8" event={"ID":"8e3c0e55-238e-4e4b-b5fb-da86a9948f01","Type":"ContainerDied","Data":"b56b1e4bfc9cb063d1630945033f7857eab14f35aea0f527f4132ee285b7ba10"} Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.880468 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b56b1e4bfc9cb063d1630945033f7857eab14f35aea0f527f4132ee285b7ba10" Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.890825 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51add02-d86c-4eb3-924d-1b2ac530e97b","Type":"ContainerStarted","Data":"0ffb3e321bea803090e9084b8864f32d053e1fb66b69554db5ac6b16f37a45db"} Jan 01 08:47:04 crc kubenswrapper[4867]: I0101 08:47:04.956581 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:47:04 crc kubenswrapper[4867]: W0101 08:47:04.960157 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1032e145_2486_4fe1_9bde_b067d64c5d1c.slice/crio-4396ed1a1dd6df9589cdcf1e89d1ab371a26dda2389465c77cc40a9c4009e360 WatchSource:0}: Error finding container 4396ed1a1dd6df9589cdcf1e89d1ab371a26dda2389465c77cc40a9c4009e360: Status 404 returned error can't find the container with id 4396ed1a1dd6df9589cdcf1e89d1ab371a26dda2389465c77cc40a9c4009e360 Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.099958 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bbc7d46bf-rkzwt"] Jan 01 08:47:05 crc kubenswrapper[4867]: E0101 08:47:05.100401 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3c0e55-238e-4e4b-b5fb-da86a9948f01" containerName="neutron-db-sync" Jan 01 08:47:05 crc kubenswrapper[4867]: 
I0101 08:47:05.100418 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3c0e55-238e-4e4b-b5fb-da86a9948f01" containerName="neutron-db-sync" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.100657 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3c0e55-238e-4e4b-b5fb-da86a9948f01" containerName="neutron-db-sync" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.101720 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.117440 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.163002 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d" path="/var/lib/kubelet/pods/49b4a3f5-7ac8-4bb6-866a-5b6ca795ce0d/volumes" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.165145 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862f6004-f33b-4866-be38-15e5e3227b0e" path="/var/lib/kubelet/pods/862f6004-f33b-4866-be38-15e5e3227b0e/volumes" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.165933 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae35dc22-012d-4c8e-8f97-5c5ee3596d96" path="/var/lib/kubelet/pods/ae35dc22-012d-4c8e-8f97-5c5ee3596d96/volumes" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.170678 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bbc7d46bf-rkzwt"] Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.185947 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6bccf6db66-lbtdw"] Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.188041 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.198268 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.198502 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.198662 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vlnxp" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.198789 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.205656 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdzc2\" (UniqueName: \"kubernetes.io/projected/aef45f27-5b04-455e-b71d-693aebb9a57b-kube-api-access-vdzc2\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.205711 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-swift-storage-0\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.205761 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc 
kubenswrapper[4867]: I0101 08:47:05.205780 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.205834 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-config\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.205862 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-svc\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.221252 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bccf6db66-lbtdw"] Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.307503 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-config\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.307542 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.307567 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.307583 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-httpd-config\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.307632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-ovndb-tls-certs\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.307653 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-config\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.307679 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-svc\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: 
\"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.307695 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-combined-ca-bundle\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.307742 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdzc2\" (UniqueName: \"kubernetes.io/projected/aef45f27-5b04-455e-b71d-693aebb9a57b-kube-api-access-vdzc2\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.307770 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-swift-storage-0\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.307793 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpt69\" (UniqueName: \"kubernetes.io/projected/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-kube-api-access-kpt69\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.308575 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" 
(UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.308995 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-config\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.314134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-svc\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.314459 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.315198 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-swift-storage-0\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.343369 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdzc2\" (UniqueName: \"kubernetes.io/projected/aef45f27-5b04-455e-b71d-693aebb9a57b-kube-api-access-vdzc2\") pod \"dnsmasq-dns-5bbc7d46bf-rkzwt\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 
01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.410441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-combined-ca-bundle\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.410565 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpt69\" (UniqueName: \"kubernetes.io/projected/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-kube-api-access-kpt69\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.410599 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-config\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.410622 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-httpd-config\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.410671 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-ovndb-tls-certs\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.415121 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-httpd-config\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.415341 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-ovndb-tls-certs\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.424238 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-combined-ca-bundle\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.430562 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-config\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.448751 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpt69\" (UniqueName: \"kubernetes.io/projected/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-kube-api-access-kpt69\") pod \"neutron-6bccf6db66-lbtdw\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.483394 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.521339 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.923039 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1032e145-2486-4fe1-9bde-b067d64c5d1c","Type":"ContainerStarted","Data":"dd7d657d98de1804bb48e8272431ea76481f11e8f2c965c2ac1f114a9dbdcf4b"} Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.923328 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1032e145-2486-4fe1-9bde-b067d64c5d1c","Type":"ContainerStarted","Data":"4396ed1a1dd6df9589cdcf1e89d1ab371a26dda2389465c77cc40a9c4009e360"} Jan 01 08:47:05 crc kubenswrapper[4867]: I0101 08:47:05.925746 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa829cab-564c-410b-a84f-50bc6bba8676","Type":"ContainerStarted","Data":"d73749da25c692d25167f228707058330f1f86f64e88c7d64588f8b946fc367d"} Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.001287 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bbc7d46bf-rkzwt"] Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.249721 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bccf6db66-lbtdw"] Jan 01 08:47:06 crc kubenswrapper[4867]: W0101 08:47:06.260993 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96b7e6f9_7a1c_4f53_8317_f11e46a64ee4.slice/crio-c570023f8844b34978cd6e84f1a9270eb4eac37d778cd320e11ea3caa4df3bad WatchSource:0}: Error finding container c570023f8844b34978cd6e84f1a9270eb4eac37d778cd320e11ea3caa4df3bad: Status 404 returned error can't find the container with id 
c570023f8844b34978cd6e84f1a9270eb4eac37d778cd320e11ea3caa4df3bad Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.945209 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa829cab-564c-410b-a84f-50bc6bba8676","Type":"ContainerStarted","Data":"bf5a24a0389da79ac334afcfb3dee92dc2c07c58a8ef2298c7e0f9ace80ed7c0"} Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.945863 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa829cab-564c-410b-a84f-50bc6bba8676","Type":"ContainerStarted","Data":"82b64cc4d5a9681c6b6e7f280a45133b1d7d8b66620bf65a4a254e9c97580d6a"} Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.961667 4867 generic.go:334] "Generic (PLEG): container finished" podID="23acd1c1-f4b4-4d70-be4e-ea07cbff8053" containerID="7517e3a771393dd813ce33a6e896560947fef7f4d72c0dac4bc8064fd05eda37" exitCode=0 Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.961755 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-sx242" event={"ID":"23acd1c1-f4b4-4d70-be4e-ea07cbff8053","Type":"ContainerDied","Data":"7517e3a771393dd813ce33a6e896560947fef7f4d72c0dac4bc8064fd05eda37"} Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.968845 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.968828164 podStartE2EDuration="3.968828164s" podCreationTimestamp="2026-01-01 08:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:06.968063163 +0000 UTC m=+1236.103331922" watchObservedRunningTime="2026-01-01 08:47:06.968828164 +0000 UTC m=+1236.104096933" Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.976654 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"1032e145-2486-4fe1-9bde-b067d64c5d1c","Type":"ContainerStarted","Data":"776c8c16c62f11e76ab7ec02a01a146e73a47c3522a5e9e4d5ea077d152a0c24"} Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.982324 4867 generic.go:334] "Generic (PLEG): container finished" podID="eeb7c774-4ae0-475c-a44a-138a917beac0" containerID="dee65a7cb9358ada33024723a1727a8030b5e5df85252b95d15f51b33fabff61" exitCode=0 Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.982388 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cdg7k" event={"ID":"eeb7c774-4ae0-475c-a44a-138a917beac0","Type":"ContainerDied","Data":"dee65a7cb9358ada33024723a1727a8030b5e5df85252b95d15f51b33fabff61"} Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.984642 4867 generic.go:334] "Generic (PLEG): container finished" podID="aef45f27-5b04-455e-b71d-693aebb9a57b" containerID="b75139b61458207fe81684635d7087e47ecb4f8916750d2b41e9707204bccb28" exitCode=0 Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.984694 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" event={"ID":"aef45f27-5b04-455e-b71d-693aebb9a57b","Type":"ContainerDied","Data":"b75139b61458207fe81684635d7087e47ecb4f8916750d2b41e9707204bccb28"} Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.984709 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" event={"ID":"aef45f27-5b04-455e-b71d-693aebb9a57b","Type":"ContainerStarted","Data":"862275802f02119fdf342a987bdebfbf90b4c7f86216d1e0dfbd809789a86a1a"} Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.994263 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bccf6db66-lbtdw" event={"ID":"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4","Type":"ContainerStarted","Data":"84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9"} Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.994465 4867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bccf6db66-lbtdw" event={"ID":"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4","Type":"ContainerStarted","Data":"8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e"} Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.994527 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bccf6db66-lbtdw" event={"ID":"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4","Type":"ContainerStarted","Data":"c570023f8844b34978cd6e84f1a9270eb4eac37d778cd320e11ea3caa4df3bad"} Jan 01 08:47:06 crc kubenswrapper[4867]: I0101 08:47:06.995293 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:07 crc kubenswrapper[4867]: I0101 08:47:07.142404 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6bccf6db66-lbtdw" podStartSLOduration=2.142387479 podStartE2EDuration="2.142387479s" podCreationTimestamp="2026-01-01 08:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:07.141859005 +0000 UTC m=+1236.277127794" watchObservedRunningTime="2026-01-01 08:47:07.142387479 +0000 UTC m=+1236.277656248" Jan 01 08:47:07 crc kubenswrapper[4867]: I0101 08:47:07.143630 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.143621854 podStartE2EDuration="4.143621854s" podCreationTimestamp="2026-01-01 08:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:07.079436315 +0000 UTC m=+1236.214705104" watchObservedRunningTime="2026-01-01 08:47:07.143621854 +0000 UTC m=+1236.278890623" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.007741 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="83a36dad-781c-47b3-a1f2-d8aa5d7182fb" containerID="5e417c532e469853d56e32420e309e312face5cadadd0c1e91903b890365eb58" exitCode=0 Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.007966 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7fvfm" event={"ID":"83a36dad-781c-47b3-a1f2-d8aa5d7182fb","Type":"ContainerDied","Data":"5e417c532e469853d56e32420e309e312face5cadadd0c1e91903b890365eb58"} Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.017484 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" event={"ID":"aef45f27-5b04-455e-b71d-693aebb9a57b","Type":"ContainerStarted","Data":"4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498"} Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.017567 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.070422 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" podStartSLOduration=3.070400062 podStartE2EDuration="3.070400062s" podCreationTimestamp="2026-01-01 08:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:08.060451643 +0000 UTC m=+1237.195720412" watchObservedRunningTime="2026-01-01 08:47:08.070400062 +0000 UTC m=+1237.205668831" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.296271 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fb785fd89-9d8g9"] Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.298000 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.302921 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fb785fd89-9d8g9"] Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.303302 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.303340 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.487212 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-combined-ca-bundle\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.487279 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-internal-tls-certs\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.487316 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-config\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.487349 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-ovndb-tls-certs\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.487375 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-httpd-config\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.487393 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ph6\" (UniqueName: \"kubernetes.io/projected/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-kube-api-access-54ph6\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.487411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-public-tls-certs\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.588947 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-ovndb-tls-certs\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.589001 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-httpd-config\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.589022 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54ph6\" (UniqueName: \"kubernetes.io/projected/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-kube-api-access-54ph6\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.589041 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-public-tls-certs\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.589086 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-combined-ca-bundle\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.589137 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-internal-tls-certs\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.589176 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-config\") pod 
\"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.595419 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-public-tls-certs\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.596570 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-ovndb-tls-certs\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.596742 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-combined-ca-bundle\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.597196 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-config\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.612370 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-httpd-config\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc 
kubenswrapper[4867]: I0101 08:47:08.612483 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-internal-tls-certs\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.619067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54ph6\" (UniqueName: \"kubernetes.io/projected/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-kube-api-access-54ph6\") pod \"neutron-5fb785fd89-9d8g9\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:08 crc kubenswrapper[4867]: I0101 08:47:08.623708 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.534689 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-cdg7k" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.535485 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.724016 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-config-data\") pod \"eeb7c774-4ae0-475c-a44a-138a917beac0\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.724408 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-fernet-keys\") pod \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.724563 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-credential-keys\") pod \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.724663 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-scripts\") pod \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.724698 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm7zj\" (UniqueName: \"kubernetes.io/projected/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-kube-api-access-jm7zj\") pod \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.724738 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-combined-ca-bundle\") pod \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.724778 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwvtx\" (UniqueName: \"kubernetes.io/projected/eeb7c774-4ae0-475c-a44a-138a917beac0-kube-api-access-hwvtx\") pod \"eeb7c774-4ae0-475c-a44a-138a917beac0\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.724804 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-combined-ca-bundle\") pod \"eeb7c774-4ae0-475c-a44a-138a917beac0\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.724838 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-config-data\") pod \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\" (UID: \"83a36dad-781c-47b3-a1f2-d8aa5d7182fb\") " Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.724906 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb7c774-4ae0-475c-a44a-138a917beac0-logs\") pod \"eeb7c774-4ae0-475c-a44a-138a917beac0\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.724937 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-scripts\") pod \"eeb7c774-4ae0-475c-a44a-138a917beac0\" (UID: \"eeb7c774-4ae0-475c-a44a-138a917beac0\") " Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.729755 4867 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-scripts" (OuterVolumeSpecName: "scripts") pod "eeb7c774-4ae0-475c-a44a-138a917beac0" (UID: "eeb7c774-4ae0-475c-a44a-138a917beac0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.730766 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb7c774-4ae0-475c-a44a-138a917beac0-logs" (OuterVolumeSpecName: "logs") pod "eeb7c774-4ae0-475c-a44a-138a917beac0" (UID: "eeb7c774-4ae0-475c-a44a-138a917beac0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.734238 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "83a36dad-781c-47b3-a1f2-d8aa5d7182fb" (UID: "83a36dad-781c-47b3-a1f2-d8aa5d7182fb"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.734817 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "83a36dad-781c-47b3-a1f2-d8aa5d7182fb" (UID: "83a36dad-781c-47b3-a1f2-d8aa5d7182fb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.737717 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb7c774-4ae0-475c-a44a-138a917beac0-kube-api-access-hwvtx" (OuterVolumeSpecName: "kube-api-access-hwvtx") pod "eeb7c774-4ae0-475c-a44a-138a917beac0" (UID: "eeb7c774-4ae0-475c-a44a-138a917beac0"). 
InnerVolumeSpecName "kube-api-access-hwvtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.739606 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-scripts" (OuterVolumeSpecName: "scripts") pod "83a36dad-781c-47b3-a1f2-d8aa5d7182fb" (UID: "83a36dad-781c-47b3-a1f2-d8aa5d7182fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.741995 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-kube-api-access-jm7zj" (OuterVolumeSpecName: "kube-api-access-jm7zj") pod "83a36dad-781c-47b3-a1f2-d8aa5d7182fb" (UID: "83a36dad-781c-47b3-a1f2-d8aa5d7182fb"). InnerVolumeSpecName "kube-api-access-jm7zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.764022 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eeb7c774-4ae0-475c-a44a-138a917beac0" (UID: "eeb7c774-4ae0-475c-a44a-138a917beac0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.766175 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-config-data" (OuterVolumeSpecName: "config-data") pod "eeb7c774-4ae0-475c-a44a-138a917beac0" (UID: "eeb7c774-4ae0-475c-a44a-138a917beac0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.767604 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83a36dad-781c-47b3-a1f2-d8aa5d7182fb" (UID: "83a36dad-781c-47b3-a1f2-d8aa5d7182fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.768196 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-config-data" (OuterVolumeSpecName: "config-data") pod "83a36dad-781c-47b3-a1f2-d8aa5d7182fb" (UID: "83a36dad-781c-47b3-a1f2-d8aa5d7182fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.827336 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.827363 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm7zj\" (UniqueName: \"kubernetes.io/projected/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-kube-api-access-jm7zj\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.827373 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.827383 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwvtx\" (UniqueName: \"kubernetes.io/projected/eeb7c774-4ae0-475c-a44a-138a917beac0-kube-api-access-hwvtx\") on node \"crc\" 
DevicePath \"\"" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.827392 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.827401 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.827410 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb7c774-4ae0-475c-a44a-138a917beac0-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.827417 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.827425 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb7c774-4ae0-475c-a44a-138a917beac0-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.827433 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:10 crc kubenswrapper[4867]: I0101 08:47:10.827442 4867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83a36dad-781c-47b3-a1f2-d8aa5d7182fb-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.060854 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cdg7k" 
event={"ID":"eeb7c774-4ae0-475c-a44a-138a917beac0","Type":"ContainerDied","Data":"c45c42b74c6f608b40077b8e79b0d6279d0548a40f095a1a141e3585c5c6c38e"} Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.060936 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c45c42b74c6f608b40077b8e79b0d6279d0548a40f095a1a141e3585c5c6c38e" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.061010 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-cdg7k" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.072740 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7fvfm" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.072360 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7fvfm" event={"ID":"83a36dad-781c-47b3-a1f2-d8aa5d7182fb","Type":"ContainerDied","Data":"b3b2addb6a313eb3b435309d82dbf094dc3184a6c72db275c076f2586dc138c5"} Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.073947 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3b2addb6a313eb3b435309d82dbf094dc3184a6c72db275c076f2586dc138c5" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.664345 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-67dd85d5b6-ww7ll"] Jan 01 08:47:11 crc kubenswrapper[4867]: E0101 08:47:11.665808 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83a36dad-781c-47b3-a1f2-d8aa5d7182fb" containerName="keystone-bootstrap" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.665895 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="83a36dad-781c-47b3-a1f2-d8aa5d7182fb" containerName="keystone-bootstrap" Jan 01 08:47:11 crc kubenswrapper[4867]: E0101 08:47:11.665965 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb7c774-4ae0-475c-a44a-138a917beac0" 
containerName="placement-db-sync" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.666024 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb7c774-4ae0-475c-a44a-138a917beac0" containerName="placement-db-sync" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.666231 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="83a36dad-781c-47b3-a1f2-d8aa5d7182fb" containerName="keystone-bootstrap" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.666304 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb7c774-4ae0-475c-a44a-138a917beac0" containerName="placement-db-sync" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.667283 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.672125 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.672340 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-smwhf" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.674376 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.681916 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.682590 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.684594 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-67dd85d5b6-ww7ll"] Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.770736 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-combined-ca-bundle\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.770819 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb687\" (UniqueName: \"kubernetes.io/projected/1822baf8-11aa-4152-a74f-2ce0383c1094-kube-api-access-cb687\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.770864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-public-tls-certs\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.770939 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-scripts\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.770961 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-config-data\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.770980 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-internal-tls-certs\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.770998 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1822baf8-11aa-4152-a74f-2ce0383c1094-logs\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.774874 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6498f7d58c-nhfz8"] Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.784384 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.788492 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-67p9k" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.788568 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.788802 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.788964 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6498f7d58c-nhfz8"] Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.792633 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.792790 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 01 08:47:11 crc 
kubenswrapper[4867]: I0101 08:47:11.792926 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.873793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-public-tls-certs\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.873843 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-combined-ca-bundle\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.873876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-public-tls-certs\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.873927 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-config-data\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.873943 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-scripts\") pod 
\"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.873962 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-config-data\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.873978 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-fernet-keys\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.873993 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-internal-tls-certs\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.874010 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-internal-tls-certs\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.874026 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1822baf8-11aa-4152-a74f-2ce0383c1094-logs\") pod \"placement-67dd85d5b6-ww7ll\" (UID: 
\"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.874054 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp592\" (UniqueName: \"kubernetes.io/projected/985cc3ff-ea2f-4386-a828-180deef97412-kube-api-access-fp592\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.874088 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-combined-ca-bundle\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.874121 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-credential-keys\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.874147 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb687\" (UniqueName: \"kubernetes.io/projected/1822baf8-11aa-4152-a74f-2ce0383c1094-kube-api-access-cb687\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.874176 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-scripts\") pod \"keystone-6498f7d58c-nhfz8\" (UID: 
\"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.880246 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-internal-tls-certs\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.880521 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1822baf8-11aa-4152-a74f-2ce0383c1094-logs\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.884863 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-combined-ca-bundle\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.886128 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-scripts\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.890471 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-config-data\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.903396 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-public-tls-certs\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.907240 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb687\" (UniqueName: \"kubernetes.io/projected/1822baf8-11aa-4152-a74f-2ce0383c1094-kube-api-access-cb687\") pod \"placement-67dd85d5b6-ww7ll\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.978027 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-combined-ca-bundle\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.978106 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-public-tls-certs\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.978171 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-config-data\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.978216 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-fernet-keys\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.978238 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-internal-tls-certs\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.978276 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp592\" (UniqueName: \"kubernetes.io/projected/985cc3ff-ea2f-4386-a828-180deef97412-kube-api-access-fp592\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.978349 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-credential-keys\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.978406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-scripts\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.988017 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-public-tls-certs\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.988338 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-scripts\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.988516 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-internal-tls-certs\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.989526 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-config-data\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.993347 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.994607 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-credential-keys\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:11 crc kubenswrapper[4867]: I0101 08:47:11.994739 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-combined-ca-bundle\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:12 crc kubenswrapper[4867]: I0101 08:47:12.003417 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-fernet-keys\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:12 crc kubenswrapper[4867]: I0101 08:47:12.003454 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp592\" (UniqueName: \"kubernetes.io/projected/985cc3ff-ea2f-4386-a828-180deef97412-kube-api-access-fp592\") pod \"keystone-6498f7d58c-nhfz8\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:12 crc kubenswrapper[4867]: I0101 08:47:12.123875 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.098029 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-sx242" event={"ID":"23acd1c1-f4b4-4d70-be4e-ea07cbff8053","Type":"ContainerDied","Data":"8177d642c3d58eb781af0f63b55181622f7956b45ffda266211965604abccb6e"} Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.098444 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8177d642c3d58eb781af0f63b55181622f7956b45ffda266211965604abccb6e" Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.130529 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-sx242" Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.200700 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-db-sync-config-data\") pod \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.200826 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-combined-ca-bundle\") pod \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.201031 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxdf7\" (UniqueName: \"kubernetes.io/projected/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-kube-api-access-rxdf7\") pod \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\" (UID: \"23acd1c1-f4b4-4d70-be4e-ea07cbff8053\") " Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.206398 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "23acd1c1-f4b4-4d70-be4e-ea07cbff8053" (UID: "23acd1c1-f4b4-4d70-be4e-ea07cbff8053"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.207727 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-kube-api-access-rxdf7" (OuterVolumeSpecName: "kube-api-access-rxdf7") pod "23acd1c1-f4b4-4d70-be4e-ea07cbff8053" (UID: "23acd1c1-f4b4-4d70-be4e-ea07cbff8053"). InnerVolumeSpecName "kube-api-access-rxdf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.226517 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23acd1c1-f4b4-4d70-be4e-ea07cbff8053" (UID: "23acd1c1-f4b4-4d70-be4e-ea07cbff8053"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.303219 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxdf7\" (UniqueName: \"kubernetes.io/projected/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-kube-api-access-rxdf7\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.303256 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.303268 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23acd1c1-f4b4-4d70-be4e-ea07cbff8053-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.420369 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6498f7d58c-nhfz8"] Jan 01 08:47:13 crc kubenswrapper[4867]: W0101 08:47:13.422788 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod985cc3ff_ea2f_4386_a828_180deef97412.slice/crio-78d166e6881a233791f4c96550ca9196d6e3169a5a30f0435a44c02b656e7909 WatchSource:0}: Error finding container 78d166e6881a233791f4c96550ca9196d6e3169a5a30f0435a44c02b656e7909: Status 404 returned error can't find the container with id 78d166e6881a233791f4c96550ca9196d6e3169a5a30f0435a44c02b656e7909 Jan 01 08:47:13 crc kubenswrapper[4867]: W0101 08:47:13.501187 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1822baf8_11aa_4152_a74f_2ce0383c1094.slice/crio-9109b287283ce25f0cf31d32541a7913dcbf7e7e7f86d7286073c204ccaf08bc WatchSource:0}: Error finding container 
9109b287283ce25f0cf31d32541a7913dcbf7e7e7f86d7286073c204ccaf08bc: Status 404 returned error can't find the container with id 9109b287283ce25f0cf31d32541a7913dcbf7e7e7f86d7286073c204ccaf08bc Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.502296 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-67dd85d5b6-ww7ll"] Jan 01 08:47:13 crc kubenswrapper[4867]: I0101 08:47:13.590763 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fb785fd89-9d8g9"] Jan 01 08:47:13 crc kubenswrapper[4867]: W0101 08:47:13.592230 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0973b1fb_6399_4d31_aa7e_2a41a163e4f4.slice/crio-ff5e46304dfd2d1375fb26f79017527c9b78ef588816bac9d81188ffad6768b8 WatchSource:0}: Error finding container ff5e46304dfd2d1375fb26f79017527c9b78ef588816bac9d81188ffad6768b8: Status 404 returned error can't find the container with id ff5e46304dfd2d1375fb26f79017527c9b78ef588816bac9d81188ffad6768b8 Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.137069 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6498f7d58c-nhfz8" event={"ID":"985cc3ff-ea2f-4386-a828-180deef97412","Type":"ContainerStarted","Data":"8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4"} Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.137461 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6498f7d58c-nhfz8" event={"ID":"985cc3ff-ea2f-4386-a828-180deef97412","Type":"ContainerStarted","Data":"78d166e6881a233791f4c96550ca9196d6e3169a5a30f0435a44c02b656e7909"} Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.137497 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.154165 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-5fb785fd89-9d8g9" event={"ID":"0973b1fb-6399-4d31-aa7e-2a41a163e4f4","Type":"ContainerStarted","Data":"729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d"} Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.154372 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fb785fd89-9d8g9" event={"ID":"0973b1fb-6399-4d31-aa7e-2a41a163e4f4","Type":"ContainerStarted","Data":"ff5e46304dfd2d1375fb26f79017527c9b78ef588816bac9d81188ffad6768b8"} Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.166121 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51add02-d86c-4eb3-924d-1b2ac530e97b","Type":"ContainerStarted","Data":"cf8d5a63e66d616af8da42126dcb234a35029b480562ae50fceae31231a65b57"} Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.171097 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-sx242" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.171589 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6498f7d58c-nhfz8" podStartSLOduration=3.171553967 podStartE2EDuration="3.171553967s" podCreationTimestamp="2026-01-01 08:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:14.154390176 +0000 UTC m=+1243.289658965" watchObservedRunningTime="2026-01-01 08:47:14.171553967 +0000 UTC m=+1243.306822736" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.173316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-67dd85d5b6-ww7ll" event={"ID":"1822baf8-11aa-4152-a74f-2ce0383c1094","Type":"ContainerStarted","Data":"e80411603dc0ac8d446f1e707d73b2bad909e42859006cf6a585616040d3b259"} Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.173377 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-67dd85d5b6-ww7ll" event={"ID":"1822baf8-11aa-4152-a74f-2ce0383c1094","Type":"ContainerStarted","Data":"455b0cde75a033b7a0c94fdc6b6b1dd1216e9777beb9c14b66a6998f6b2fa1d5"} Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.173394 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-67dd85d5b6-ww7ll" event={"ID":"1822baf8-11aa-4152-a74f-2ce0383c1094","Type":"ContainerStarted","Data":"9109b287283ce25f0cf31d32541a7913dcbf7e7e7f86d7286073c204ccaf08bc"} Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.173733 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.173824 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.196747 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-67dd85d5b6-ww7ll" podStartSLOduration=3.196702342 podStartE2EDuration="3.196702342s" podCreationTimestamp="2026-01-01 08:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:14.190745385 +0000 UTC m=+1243.326014164" watchObservedRunningTime="2026-01-01 08:47:14.196702342 +0000 UTC m=+1243.331971111" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.282641 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.282734 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.315903 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:14 crc 
kubenswrapper[4867]: I0101 08:47:14.324779 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.448481 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.449972 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.509143 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6596d5f4d6-9cxqr"] Jan 01 08:47:14 crc kubenswrapper[4867]: E0101 08:47:14.509552 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23acd1c1-f4b4-4d70-be4e-ea07cbff8053" containerName="barbican-db-sync" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.509581 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="23acd1c1-f4b4-4d70-be4e-ea07cbff8053" containerName="barbican-db-sync" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.509759 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="23acd1c1-f4b4-4d70-be4e-ea07cbff8053" containerName="barbican-db-sync" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.513816 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.516994 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.517161 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-flkxl" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.517193 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.529399 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7965d77d77-cwbt7"] Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.532686 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.543260 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.543632 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6596d5f4d6-9cxqr"] Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.543801 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.549379 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7965d77d77-cwbt7"] Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.556373 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bbc7d46bf-rkzwt"] Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.556730 4867 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" podUID="aef45f27-5b04-455e-b71d-693aebb9a57b" containerName="dnsmasq-dns" containerID="cri-o://4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498" gracePeriod=10 Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.560144 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.560325 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.612783 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b4c6f4469-xj4b9"] Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.614168 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.626071 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b4c6f4469-xj4b9"] Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.627926 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-combined-ca-bundle\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.627992 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data-custom\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 
crc kubenswrapper[4867]: I0101 08:47:14.628045 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg6sm\" (UniqueName: \"kubernetes.io/projected/c6e96caa-b906-4b24-af21-8068ea727bba-kube-api-access-fg6sm\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.628084 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.628118 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e96caa-b906-4b24-af21-8068ea727bba-logs\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739190 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-combined-ca-bundle\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739256 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd62t\" (UniqueName: \"kubernetes.io/projected/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-kube-api-access-dd62t\") 
pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739291 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data-custom\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739314 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data-custom\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739335 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739366 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-svc\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739387 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg6sm\" (UniqueName: 
\"kubernetes.io/projected/c6e96caa-b906-4b24-af21-8068ea727bba-kube-api-access-fg6sm\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739408 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739432 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739449 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-combined-ca-bundle\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739471 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e96caa-b906-4b24-af21-8068ea727bba-logs\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739493 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-logs\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739514 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739552 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739573 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-config\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.739598 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfvs2\" (UniqueName: \"kubernetes.io/projected/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-kube-api-access-rfvs2\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 
08:47:14.740388 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e96caa-b906-4b24-af21-8068ea727bba-logs\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.749760 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data-custom\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.751236 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.752787 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-combined-ca-bundle\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.774171 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7998fdfbd-7j4fm"] Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.775695 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.776089 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg6sm\" (UniqueName: \"kubernetes.io/projected/c6e96caa-b906-4b24-af21-8068ea727bba-kube-api-access-fg6sm\") pod \"barbican-keystone-listener-6596d5f4d6-9cxqr\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") " pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.782526 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.792788 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7998fdfbd-7j4fm"] Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.841140 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd62t\" (UniqueName: \"kubernetes.io/projected/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-kube-api-access-dd62t\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.841387 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data-custom\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.841469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " 
pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.841555 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-svc\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.841664 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.841781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-combined-ca-bundle\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.841868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-logs\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.842327 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 
crc kubenswrapper[4867]: I0101 08:47:14.842425 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data-custom\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.842538 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqbnw\" (UniqueName: \"kubernetes.io/projected/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-kube-api-access-pqbnw\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.842637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.842745 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-config\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.843341 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfvs2\" (UniqueName: \"kubernetes.io/projected/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-kube-api-access-rfvs2\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 
01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.843428 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-combined-ca-bundle\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.843529 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-logs\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.843624 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.844317 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-logs\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.845162 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-svc\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.845288 4867 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.845423 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.846726 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.848267 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-config\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.856281 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.857315 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-combined-ca-bundle\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " 
pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.857981 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data-custom\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.858282 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.861140 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd62t\" (UniqueName: \"kubernetes.io/projected/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-kube-api-access-dd62t\") pod \"barbican-worker-7965d77d77-cwbt7\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") " pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.864223 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfvs2\" (UniqueName: \"kubernetes.io/projected/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-kube-api-access-rfvs2\") pod \"dnsmasq-dns-5b4c6f4469-xj4b9\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.885652 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.945399 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data-custom\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.945465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqbnw\" (UniqueName: \"kubernetes.io/projected/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-kube-api-access-pqbnw\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.945537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-combined-ca-bundle\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.945567 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-logs\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.945587 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " 
pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.947046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-logs\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.951138 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data-custom\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.955455 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-combined-ca-bundle\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.956171 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:14 crc kubenswrapper[4867]: I0101 08:47:14.971520 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqbnw\" (UniqueName: \"kubernetes.io/projected/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-kube-api-access-pqbnw\") pod \"barbican-api-7998fdfbd-7j4fm\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 
08:47:15.018511 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.145225 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.197371 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.201073 4867 generic.go:334] "Generic (PLEG): container finished" podID="aef45f27-5b04-455e-b71d-693aebb9a57b" containerID="4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498" exitCode=0 Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.201143 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" event={"ID":"aef45f27-5b04-455e-b71d-693aebb9a57b","Type":"ContainerDied","Data":"4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498"} Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.201171 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" event={"ID":"aef45f27-5b04-455e-b71d-693aebb9a57b","Type":"ContainerDied","Data":"862275802f02119fdf342a987bdebfbf90b4c7f86216d1e0dfbd809789a86a1a"} Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.201189 4867 scope.go:117] "RemoveContainer" containerID="4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.230184 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fb785fd89-9d8g9" event={"ID":"0973b1fb-6399-4d31-aa7e-2a41a163e4f4","Type":"ContainerStarted","Data":"bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59"} Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.230218 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.230232 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.230397 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.230572 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.230919 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.255704 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fb785fd89-9d8g9" podStartSLOduration=7.255681916 podStartE2EDuration="7.255681916s" podCreationTimestamp="2026-01-01 08:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:15.24941529 +0000 UTC m=+1244.384684089" watchObservedRunningTime="2026-01-01 08:47:15.255681916 +0000 UTC m=+1244.390950685" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.268955 4867 scope.go:117] "RemoveContainer" containerID="b75139b61458207fe81684635d7087e47ecb4f8916750d2b41e9707204bccb28" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.323334 4867 scope.go:117] "RemoveContainer" containerID="4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498" Jan 01 08:47:15 crc kubenswrapper[4867]: E0101 08:47:15.323734 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498\": container with ID starting with 
4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498 not found: ID does not exist" containerID="4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.323868 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498"} err="failed to get container status \"4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498\": rpc error: code = NotFound desc = could not find container \"4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498\": container with ID starting with 4ae9c3edcaf133a2935de58b33c334622e1d02427a751d6d2e6ecea061577498 not found: ID does not exist" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.324017 4867 scope.go:117] "RemoveContainer" containerID="b75139b61458207fe81684635d7087e47ecb4f8916750d2b41e9707204bccb28" Jan 01 08:47:15 crc kubenswrapper[4867]: E0101 08:47:15.324297 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b75139b61458207fe81684635d7087e47ecb4f8916750d2b41e9707204bccb28\": container with ID starting with b75139b61458207fe81684635d7087e47ecb4f8916750d2b41e9707204bccb28 not found: ID does not exist" containerID="b75139b61458207fe81684635d7087e47ecb4f8916750d2b41e9707204bccb28" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.324371 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b75139b61458207fe81684635d7087e47ecb4f8916750d2b41e9707204bccb28"} err="failed to get container status \"b75139b61458207fe81684635d7087e47ecb4f8916750d2b41e9707204bccb28\": rpc error: code = NotFound desc = could not find container \"b75139b61458207fe81684635d7087e47ecb4f8916750d2b41e9707204bccb28\": container with ID starting with b75139b61458207fe81684635d7087e47ecb4f8916750d2b41e9707204bccb28 not found: ID does not 
exist" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.373696 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-svc\") pod \"aef45f27-5b04-455e-b71d-693aebb9a57b\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.373848 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-config\") pod \"aef45f27-5b04-455e-b71d-693aebb9a57b\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.373909 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdzc2\" (UniqueName: \"kubernetes.io/projected/aef45f27-5b04-455e-b71d-693aebb9a57b-kube-api-access-vdzc2\") pod \"aef45f27-5b04-455e-b71d-693aebb9a57b\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.373943 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-nb\") pod \"aef45f27-5b04-455e-b71d-693aebb9a57b\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.374005 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-swift-storage-0\") pod \"aef45f27-5b04-455e-b71d-693aebb9a57b\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.374024 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-sb\") pod \"aef45f27-5b04-455e-b71d-693aebb9a57b\" (UID: \"aef45f27-5b04-455e-b71d-693aebb9a57b\") " Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.380073 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef45f27-5b04-455e-b71d-693aebb9a57b-kube-api-access-vdzc2" (OuterVolumeSpecName: "kube-api-access-vdzc2") pod "aef45f27-5b04-455e-b71d-693aebb9a57b" (UID: "aef45f27-5b04-455e-b71d-693aebb9a57b"). InnerVolumeSpecName "kube-api-access-vdzc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.426782 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aef45f27-5b04-455e-b71d-693aebb9a57b" (UID: "aef45f27-5b04-455e-b71d-693aebb9a57b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.476686 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdzc2\" (UniqueName: \"kubernetes.io/projected/aef45f27-5b04-455e-b71d-693aebb9a57b-kube-api-access-vdzc2\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.476718 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.498956 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7965d77d77-cwbt7"] Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.511482 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-config" (OuterVolumeSpecName: "config") pod "aef45f27-5b04-455e-b71d-693aebb9a57b" (UID: "aef45f27-5b04-455e-b71d-693aebb9a57b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.530404 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aef45f27-5b04-455e-b71d-693aebb9a57b" (UID: "aef45f27-5b04-455e-b71d-693aebb9a57b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.533931 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aef45f27-5b04-455e-b71d-693aebb9a57b" (UID: "aef45f27-5b04-455e-b71d-693aebb9a57b"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.559305 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6596d5f4d6-9cxqr"] Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.574406 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aef45f27-5b04-455e-b71d-693aebb9a57b" (UID: "aef45f27-5b04-455e-b71d-693aebb9a57b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.587053 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.587085 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.587096 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.587106 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aef45f27-5b04-455e-b71d-693aebb9a57b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.741583 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b4c6f4469-xj4b9"] Jan 01 08:47:15 crc kubenswrapper[4867]: W0101 08:47:15.747497 4867 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79f8db2c_a9dd_4a94_a0fc_c24d551c2baa.slice/crio-d807638e9520261dde0834d8a7855aeac43a8567e9921c38259f4691a0cc6700 WatchSource:0}: Error finding container d807638e9520261dde0834d8a7855aeac43a8567e9921c38259f4691a0cc6700: Status 404 returned error can't find the container with id d807638e9520261dde0834d8a7855aeac43a8567e9921c38259f4691a0cc6700 Jan 01 08:47:15 crc kubenswrapper[4867]: I0101 08:47:15.791816 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7998fdfbd-7j4fm"] Jan 01 08:47:16 crc kubenswrapper[4867]: I0101 08:47:16.246918 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7965d77d77-cwbt7" event={"ID":"22fe2632-f8f6-4ef9-9f4c-72b69bd45932","Type":"ContainerStarted","Data":"7e87f25f46bcfa7bfcd57512086c4de728bbc31e4afa1b98cc7982e64bec37f2"} Jan 01 08:47:16 crc kubenswrapper[4867]: I0101 08:47:16.249126 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7998fdfbd-7j4fm" event={"ID":"7970bf01-94d4-4ceb-9289-f6e4f7a00f86","Type":"ContainerStarted","Data":"8620038bc7a216bcaa2c84ee2c6daf70d342571e9e35270dd64fc523d1d2ed14"} Jan 01 08:47:16 crc kubenswrapper[4867]: I0101 08:47:16.249150 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7998fdfbd-7j4fm" event={"ID":"7970bf01-94d4-4ceb-9289-f6e4f7a00f86","Type":"ContainerStarted","Data":"7e54efc172f8be24d7b83a42610ea91a2ecd55045c360979da0d30c809b0ec2f"} Jan 01 08:47:16 crc kubenswrapper[4867]: I0101 08:47:16.252533 4867 generic.go:334] "Generic (PLEG): container finished" podID="79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" containerID="86c2cec082a09770270bd5194b98abf8c27b1585d93389c0c27fa861b65fdfc6" exitCode=0 Jan 01 08:47:16 crc kubenswrapper[4867]: I0101 08:47:16.252599 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" 
event={"ID":"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa","Type":"ContainerDied","Data":"86c2cec082a09770270bd5194b98abf8c27b1585d93389c0c27fa861b65fdfc6"} Jan 01 08:47:16 crc kubenswrapper[4867]: I0101 08:47:16.252624 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" event={"ID":"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa","Type":"ContainerStarted","Data":"d807638e9520261dde0834d8a7855aeac43a8567e9921c38259f4691a0cc6700"} Jan 01 08:47:16 crc kubenswrapper[4867]: I0101 08:47:16.255357 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbc7d46bf-rkzwt" Jan 01 08:47:16 crc kubenswrapper[4867]: I0101 08:47:16.256595 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" event={"ID":"c6e96caa-b906-4b24-af21-8068ea727bba","Type":"ContainerStarted","Data":"7168733cccc3027f6c89418c54278683d8482779dc899896d27c0684ce67c9d3"} Jan 01 08:47:16 crc kubenswrapper[4867]: I0101 08:47:16.436631 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bbc7d46bf-rkzwt"] Jan 01 08:47:16 crc kubenswrapper[4867]: I0101 08:47:16.445661 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bbc7d46bf-rkzwt"] Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.043732 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58dc5bfddd-522rc"] Jan 01 08:47:17 crc kubenswrapper[4867]: E0101 08:47:17.044210 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef45f27-5b04-455e-b71d-693aebb9a57b" containerName="dnsmasq-dns" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.044594 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef45f27-5b04-455e-b71d-693aebb9a57b" containerName="dnsmasq-dns" Jan 01 08:47:17 crc kubenswrapper[4867]: E0101 08:47:17.044614 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aef45f27-5b04-455e-b71d-693aebb9a57b" containerName="init" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.044622 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef45f27-5b04-455e-b71d-693aebb9a57b" containerName="init" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.044846 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef45f27-5b04-455e-b71d-693aebb9a57b" containerName="dnsmasq-dns" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.061244 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58dc5bfddd-522rc"] Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.061352 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.081924 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.082079 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.127970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-public-tls-certs\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.128059 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data-custom\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc 
kubenswrapper[4867]: I0101 08:47:17.128084 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-logs\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.128102 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-combined-ca-bundle\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.128131 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-internal-tls-certs\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.128151 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t7ll\" (UniqueName: \"kubernetes.io/projected/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-kube-api-access-7t7ll\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.128185 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " 
pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.138132 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef45f27-5b04-455e-b71d-693aebb9a57b" path="/var/lib/kubelet/pods/aef45f27-5b04-455e-b71d-693aebb9a57b/volumes" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.229610 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-logs\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.229660 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-combined-ca-bundle\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.229722 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-internal-tls-certs\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.229763 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t7ll\" (UniqueName: \"kubernetes.io/projected/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-kube-api-access-7t7ll\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.229845 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.229965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-public-tls-certs\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.230243 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data-custom\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.231099 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-logs\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.235329 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-internal-tls-certs\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.240111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data-custom\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.240361 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.241083 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-combined-ca-bundle\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.250589 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-public-tls-certs\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.255032 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t7ll\" (UniqueName: \"kubernetes.io/projected/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-kube-api-access-7t7ll\") pod \"barbican-api-58dc5bfddd-522rc\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.267853 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7998fdfbd-7j4fm" 
event={"ID":"7970bf01-94d4-4ceb-9289-f6e4f7a00f86","Type":"ContainerStarted","Data":"f0faa4e5c265cb6fd5c37a90e5f1fd1d09f968338daf05e4b9a3007515c35263"} Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.268026 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.271909 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.271929 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.273300 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" event={"ID":"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa","Type":"ContainerStarted","Data":"3ff9506c60682e3449b67de6a85c0c9e60bafe26d40e57a74aeb41a25472b5b4"} Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.273353 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.273409 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.273417 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.286398 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7998fdfbd-7j4fm" podStartSLOduration=3.286384856 podStartE2EDuration="3.286384856s" podCreationTimestamp="2026-01-01 08:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:17.286200631 +0000 UTC m=+1246.421469410" watchObservedRunningTime="2026-01-01 08:47:17.286384856 +0000 UTC m=+1246.421653615" Jan 01 08:47:17 crc kubenswrapper[4867]: 
I0101 08:47:17.314733 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" podStartSLOduration=3.31471552 podStartE2EDuration="3.31471552s" podCreationTimestamp="2026-01-01 08:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:17.311389327 +0000 UTC m=+1246.446658106" watchObservedRunningTime="2026-01-01 08:47:17.31471552 +0000 UTC m=+1246.449984289" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.397501 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.578273 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.579956 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.906001 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:17 crc kubenswrapper[4867]: I0101 08:47:17.906641 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 01 08:47:18 crc kubenswrapper[4867]: I0101 08:47:18.247546 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58dc5bfddd-522rc"] Jan 01 08:47:18 crc kubenswrapper[4867]: I0101 08:47:18.294401 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" event={"ID":"c6e96caa-b906-4b24-af21-8068ea727bba","Type":"ContainerStarted","Data":"dc673bc1feba5e02af532b24171ae7075ed044000fad91c5933e93e216ca2214"} Jan 01 08:47:18 crc kubenswrapper[4867]: I0101 
08:47:18.301173 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7965d77d77-cwbt7" event={"ID":"22fe2632-f8f6-4ef9-9f4c-72b69bd45932","Type":"ContainerStarted","Data":"65ef15ad242719f3da63fa724d97de1fb1223fd81f2c48a72e0cb2f1c91f8f4b"} Jan 01 08:47:18 crc kubenswrapper[4867]: I0101 08:47:18.301612 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:19 crc kubenswrapper[4867]: I0101 08:47:19.314962 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dc5bfddd-522rc" event={"ID":"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20","Type":"ContainerStarted","Data":"dee68f8d073a368d94e9708c1869989ddd8ada0a6eb993b2a239618bdb95a0c6"} Jan 01 08:47:19 crc kubenswrapper[4867]: I0101 08:47:19.315355 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:19 crc kubenswrapper[4867]: I0101 08:47:19.315374 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dc5bfddd-522rc" event={"ID":"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20","Type":"ContainerStarted","Data":"937838972a1573c1df4db392f223b3e988bccb1ee572a68dba6f4b3aed9b91ee"} Jan 01 08:47:19 crc kubenswrapper[4867]: I0101 08:47:19.315392 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dc5bfddd-522rc" event={"ID":"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20","Type":"ContainerStarted","Data":"c3bdf5b34305427f14a8a9029700bb4c7abd99e962ae92de63d6edf014b4bfcb"} Jan 01 08:47:19 crc kubenswrapper[4867]: I0101 08:47:19.315412 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:19 crc kubenswrapper[4867]: I0101 08:47:19.318508 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" 
event={"ID":"c6e96caa-b906-4b24-af21-8068ea727bba","Type":"ContainerStarted","Data":"12ac59ef1025a56a54145198bfc20879e2c8969f62ef2c28de3bb86b0129fd27"} Jan 01 08:47:19 crc kubenswrapper[4867]: I0101 08:47:19.329697 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7965d77d77-cwbt7" event={"ID":"22fe2632-f8f6-4ef9-9f4c-72b69bd45932","Type":"ContainerStarted","Data":"f95ad7dcbf76b229ef0f72ae0e667de7d0e25a5f3d7e84f84fd18139ab18e305"} Jan 01 08:47:19 crc kubenswrapper[4867]: I0101 08:47:19.336743 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58dc5bfddd-522rc" podStartSLOduration=2.336723618 podStartE2EDuration="2.336723618s" podCreationTimestamp="2026-01-01 08:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:19.331863752 +0000 UTC m=+1248.467132551" watchObservedRunningTime="2026-01-01 08:47:19.336723618 +0000 UTC m=+1248.471992387" Jan 01 08:47:19 crc kubenswrapper[4867]: I0101 08:47:19.342453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p6csz" event={"ID":"978a99d3-4e55-4026-a329-5da06bf36c90","Type":"ContainerStarted","Data":"b206d939a5a3a8e3fc6d74b4154bd992040ba308ec64bd564ca2a8ed436d7ec4"} Jan 01 08:47:19 crc kubenswrapper[4867]: I0101 08:47:19.354203 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7965d77d77-cwbt7" podStartSLOduration=3.181651151 podStartE2EDuration="5.354185647s" podCreationTimestamp="2026-01-01 08:47:14 +0000 UTC" firstStartedPulling="2026-01-01 08:47:15.569540133 +0000 UTC m=+1244.704808902" lastFinishedPulling="2026-01-01 08:47:17.742074629 +0000 UTC m=+1246.877343398" observedRunningTime="2026-01-01 08:47:19.353048536 +0000 UTC m=+1248.488317305" watchObservedRunningTime="2026-01-01 08:47:19.354185647 +0000 UTC m=+1248.489454416" Jan 01 08:47:19 crc 
kubenswrapper[4867]: I0101 08:47:19.380664 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" podStartSLOduration=3.172847744 podStartE2EDuration="5.380648629s" podCreationTimestamp="2026-01-01 08:47:14 +0000 UTC" firstStartedPulling="2026-01-01 08:47:15.543070111 +0000 UTC m=+1244.678338880" lastFinishedPulling="2026-01-01 08:47:17.750870986 +0000 UTC m=+1246.886139765" observedRunningTime="2026-01-01 08:47:19.372353227 +0000 UTC m=+1248.507622006" watchObservedRunningTime="2026-01-01 08:47:19.380648629 +0000 UTC m=+1248.515917398" Jan 01 08:47:19 crc kubenswrapper[4867]: I0101 08:47:19.395131 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-p6csz" podStartSLOduration=3.516056614 podStartE2EDuration="38.395111155s" podCreationTimestamp="2026-01-01 08:46:41 +0000 UTC" firstStartedPulling="2026-01-01 08:46:42.898914144 +0000 UTC m=+1212.034182913" lastFinishedPulling="2026-01-01 08:47:17.777968685 +0000 UTC m=+1246.913237454" observedRunningTime="2026-01-01 08:47:19.389004483 +0000 UTC m=+1248.524273262" watchObservedRunningTime="2026-01-01 08:47:19.395111155 +0000 UTC m=+1248.530379934" Jan 01 08:47:23 crc kubenswrapper[4867]: I0101 08:47:23.396348 4867 generic.go:334] "Generic (PLEG): container finished" podID="978a99d3-4e55-4026-a329-5da06bf36c90" containerID="b206d939a5a3a8e3fc6d74b4154bd992040ba308ec64bd564ca2a8ed436d7ec4" exitCode=0 Jan 01 08:47:23 crc kubenswrapper[4867]: I0101 08:47:23.396511 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p6csz" event={"ID":"978a99d3-4e55-4026-a329-5da06bf36c90","Type":"ContainerDied","Data":"b206d939a5a3a8e3fc6d74b4154bd992040ba308ec64bd564ca2a8ed436d7ec4"} Jan 01 08:47:23 crc kubenswrapper[4867]: E0101 08:47:23.591305 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.411788 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51add02-d86c-4eb3-924d-1b2ac530e97b","Type":"ContainerStarted","Data":"2c9ba5e7e9909a9e42d08d566af0459042e7a0415bf8fbefbf672fdede258d9c"} Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.412074 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="ceilometer-notification-agent" containerID="cri-o://0ffb3e321bea803090e9084b8864f32d053e1fb66b69554db5ac6b16f37a45db" gracePeriod=30 Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.412169 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="sg-core" containerID="cri-o://cf8d5a63e66d616af8da42126dcb234a35029b480562ae50fceae31231a65b57" gracePeriod=30 Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.412254 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="proxy-httpd" containerID="cri-o://2c9ba5e7e9909a9e42d08d566af0459042e7a0415bf8fbefbf672fdede258d9c" gracePeriod=30 Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.953357 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-p6csz" Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.976189 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-config-data\") pod \"978a99d3-4e55-4026-a329-5da06bf36c90\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.976270 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-combined-ca-bundle\") pod \"978a99d3-4e55-4026-a329-5da06bf36c90\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.976316 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/978a99d3-4e55-4026-a329-5da06bf36c90-etc-machine-id\") pod \"978a99d3-4e55-4026-a329-5da06bf36c90\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.976420 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk2cl\" (UniqueName: \"kubernetes.io/projected/978a99d3-4e55-4026-a329-5da06bf36c90-kube-api-access-jk2cl\") pod \"978a99d3-4e55-4026-a329-5da06bf36c90\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.976498 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-scripts\") pod \"978a99d3-4e55-4026-a329-5da06bf36c90\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.976674 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-db-sync-config-data\") pod \"978a99d3-4e55-4026-a329-5da06bf36c90\" (UID: \"978a99d3-4e55-4026-a329-5da06bf36c90\") " Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.979954 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/978a99d3-4e55-4026-a329-5da06bf36c90-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "978a99d3-4e55-4026-a329-5da06bf36c90" (UID: "978a99d3-4e55-4026-a329-5da06bf36c90"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.984718 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/978a99d3-4e55-4026-a329-5da06bf36c90-kube-api-access-jk2cl" (OuterVolumeSpecName: "kube-api-access-jk2cl") pod "978a99d3-4e55-4026-a329-5da06bf36c90" (UID: "978a99d3-4e55-4026-a329-5da06bf36c90"). InnerVolumeSpecName "kube-api-access-jk2cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:24 crc kubenswrapper[4867]: I0101 08:47:24.992537 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-scripts" (OuterVolumeSpecName: "scripts") pod "978a99d3-4e55-4026-a329-5da06bf36c90" (UID: "978a99d3-4e55-4026-a329-5da06bf36c90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.000367 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "978a99d3-4e55-4026-a329-5da06bf36c90" (UID: "978a99d3-4e55-4026-a329-5da06bf36c90"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.007918 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "978a99d3-4e55-4026-a329-5da06bf36c90" (UID: "978a99d3-4e55-4026-a329-5da06bf36c90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.047045 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-config-data" (OuterVolumeSpecName: "config-data") pod "978a99d3-4e55-4026-a329-5da06bf36c90" (UID: "978a99d3-4e55-4026-a329-5da06bf36c90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.077952 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.078156 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.078220 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/978a99d3-4e55-4026-a329-5da06bf36c90-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.078299 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk2cl\" (UniqueName: \"kubernetes.io/projected/978a99d3-4e55-4026-a329-5da06bf36c90-kube-api-access-jk2cl\") on node \"crc\" 
DevicePath \"\"" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.078353 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.078406 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/978a99d3-4e55-4026-a329-5da06bf36c90-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.147240 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.213393 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647d8845b5-mtlg7"] Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.214259 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" podUID="eaa076e1-87b1-4118-b601-ba85239d1239" containerName="dnsmasq-dns" containerID="cri-o://0b06e88cc4d2f8b2777aa114152eca62a31a01c35345a95786744159b492aacf" gracePeriod=10 Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.425591 4867 generic.go:334] "Generic (PLEG): container finished" podID="eaa076e1-87b1-4118-b601-ba85239d1239" containerID="0b06e88cc4d2f8b2777aa114152eca62a31a01c35345a95786744159b492aacf" exitCode=0 Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.425673 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" event={"ID":"eaa076e1-87b1-4118-b601-ba85239d1239","Type":"ContainerDied","Data":"0b06e88cc4d2f8b2777aa114152eca62a31a01c35345a95786744159b492aacf"} Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.435501 4867 generic.go:334] "Generic (PLEG): container finished" podID="b51add02-d86c-4eb3-924d-1b2ac530e97b" 
containerID="2c9ba5e7e9909a9e42d08d566af0459042e7a0415bf8fbefbf672fdede258d9c" exitCode=0 Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.435526 4867 generic.go:334] "Generic (PLEG): container finished" podID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerID="cf8d5a63e66d616af8da42126dcb234a35029b480562ae50fceae31231a65b57" exitCode=2 Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.435569 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51add02-d86c-4eb3-924d-1b2ac530e97b","Type":"ContainerDied","Data":"2c9ba5e7e9909a9e42d08d566af0459042e7a0415bf8fbefbf672fdede258d9c"} Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.435592 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51add02-d86c-4eb3-924d-1b2ac530e97b","Type":"ContainerDied","Data":"cf8d5a63e66d616af8da42126dcb234a35029b480562ae50fceae31231a65b57"} Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.443928 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p6csz" event={"ID":"978a99d3-4e55-4026-a329-5da06bf36c90","Type":"ContainerDied","Data":"18026ff58ad1ae737e98df65f047b46427fa933ffc7b8f59634a8e902ffcc426"} Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.443977 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18026ff58ad1ae737e98df65f047b46427fa933ffc7b8f59634a8e902ffcc426" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.444017 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-p6csz" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.644672 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:47:25 crc kubenswrapper[4867]: E0101 08:47:25.658496 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978a99d3-4e55-4026-a329-5da06bf36c90" containerName="cinder-db-sync" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.658516 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="978a99d3-4e55-4026-a329-5da06bf36c90" containerName="cinder-db-sync" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.658668 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="978a99d3-4e55-4026-a329-5da06bf36c90" containerName="cinder-db-sync" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.659709 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.662543 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.662697 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.665973 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.669027 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.669316 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jhfk9" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.694704 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.694938 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.695063 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d87039c4-162c-48a7-a367-176bf83b674f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.695160 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7gxf\" (UniqueName: \"kubernetes.io/projected/d87039c4-162c-48a7-a367-176bf83b674f-kube-api-access-t7gxf\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.695346 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.695439 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.725945 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b79d6d5d9-r54bp"] Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.727480 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.747137 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b79d6d5d9-r54bp"] Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.777440 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796409 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-config\") pod \"eaa076e1-87b1-4118-b601-ba85239d1239\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796458 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhfht\" (UniqueName: \"kubernetes.io/projected/eaa076e1-87b1-4118-b601-ba85239d1239-kube-api-access-xhfht\") pod \"eaa076e1-87b1-4118-b601-ba85239d1239\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796525 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-svc\") pod \"eaa076e1-87b1-4118-b601-ba85239d1239\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796557 
4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-sb\") pod \"eaa076e1-87b1-4118-b601-ba85239d1239\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796610 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-nb\") pod \"eaa076e1-87b1-4118-b601-ba85239d1239\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796629 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-swift-storage-0\") pod \"eaa076e1-87b1-4118-b601-ba85239d1239\" (UID: \"eaa076e1-87b1-4118-b601-ba85239d1239\") " Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796749 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d87039c4-162c-48a7-a367-176bf83b674f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796789 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz6wl\" (UniqueName: \"kubernetes.io/projected/75ce895d-6831-4af5-9e10-481ce05ec976-kube-api-access-vz6wl\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796823 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-swift-storage-0\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796850 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7gxf\" (UniqueName: \"kubernetes.io/projected/d87039c4-162c-48a7-a367-176bf83b674f-kube-api-access-t7gxf\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796868 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-config\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796907 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-sb\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796930 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-svc\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796949 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.796971 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-nb\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.797000 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.797034 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.797057 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.797824 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d87039c4-162c-48a7-a367-176bf83b674f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.803118 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.809238 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa076e1-87b1-4118-b601-ba85239d1239-kube-api-access-xhfht" (OuterVolumeSpecName: "kube-api-access-xhfht") pod "eaa076e1-87b1-4118-b601-ba85239d1239" (UID: "eaa076e1-87b1-4118-b601-ba85239d1239"). InnerVolumeSpecName "kube-api-access-xhfht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.812784 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.825065 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.843522 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc 
kubenswrapper[4867]: I0101 08:47:25.845076 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7gxf\" (UniqueName: \"kubernetes.io/projected/d87039c4-162c-48a7-a367-176bf83b674f-kube-api-access-t7gxf\") pod \"cinder-scheduler-0\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.859270 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 01 08:47:25 crc kubenswrapper[4867]: E0101 08:47:25.859676 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa076e1-87b1-4118-b601-ba85239d1239" containerName="dnsmasq-dns" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.859690 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa076e1-87b1-4118-b601-ba85239d1239" containerName="dnsmasq-dns" Jan 01 08:47:25 crc kubenswrapper[4867]: E0101 08:47:25.859705 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa076e1-87b1-4118-b601-ba85239d1239" containerName="init" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.859712 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa076e1-87b1-4118-b601-ba85239d1239" containerName="init" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.860079 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa076e1-87b1-4118-b601-ba85239d1239" containerName="dnsmasq-dns" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.861066 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 01 08:47:25 crc kubenswrapper[4867]: W0101 08:47:25.864302 4867 reflector.go:561] object-"openstack"/"cinder-api-config-data": failed to list *v1.Secret: secrets "cinder-api-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 01 08:47:25 crc kubenswrapper[4867]: E0101 08:47:25.864328 4867 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cinder-api-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.899244 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz6wl\" (UniqueName: \"kubernetes.io/projected/75ce895d-6831-4af5-9e10-481ce05ec976-kube-api-access-vz6wl\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.899297 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-swift-storage-0\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.900397 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-config\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " 
pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.900441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-sb\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.900469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-svc\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.900495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-nb\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.900540 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhfht\" (UniqueName: \"kubernetes.io/projected/eaa076e1-87b1-4118-b601-ba85239d1239-kube-api-access-xhfht\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.900674 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-swift-storage-0\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.901286 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-sb\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.903129 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-svc\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.904681 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-config\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.905247 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-nb\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.938937 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.946087 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eaa076e1-87b1-4118-b601-ba85239d1239" (UID: "eaa076e1-87b1-4118-b601-ba85239d1239"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.949053 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz6wl\" (UniqueName: \"kubernetes.io/projected/75ce895d-6831-4af5-9e10-481ce05ec976-kube-api-access-vz6wl\") pod \"dnsmasq-dns-b79d6d5d9-r54bp\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") " pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.961187 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-config" (OuterVolumeSpecName: "config") pod "eaa076e1-87b1-4118-b601-ba85239d1239" (UID: "eaa076e1-87b1-4118-b601-ba85239d1239"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.971364 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eaa076e1-87b1-4118-b601-ba85239d1239" (UID: "eaa076e1-87b1-4118-b601-ba85239d1239"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.979305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eaa076e1-87b1-4118-b601-ba85239d1239" (UID: "eaa076e1-87b1-4118-b601-ba85239d1239"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:25 crc kubenswrapper[4867]: I0101 08:47:25.981632 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eaa076e1-87b1-4118-b601-ba85239d1239" (UID: "eaa076e1-87b1-4118-b601-ba85239d1239"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.002380 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwv8f\" (UniqueName: \"kubernetes.io/projected/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-kube-api-access-vwv8f\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.002563 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.002610 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.002636 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " 
pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.002824 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data-custom\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.002864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-scripts\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.002914 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-logs\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.003022 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.003054 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.003071 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.003081 4867 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.003092 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaa076e1-87b1-4118-b601-ba85239d1239-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.074407 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.088404 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.104314 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwv8f\" (UniqueName: \"kubernetes.io/projected/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-kube-api-access-vwv8f\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.104595 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.104623 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.104641 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.104693 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data-custom\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.104711 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-scripts\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.104732 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-logs\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.105055 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.105534 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-logs\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc 
kubenswrapper[4867]: I0101 08:47:26.111301 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.111332 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-scripts\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.113789 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.120333 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwv8f\" (UniqueName: \"kubernetes.io/projected/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-kube-api-access-vwv8f\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.453822 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.453707 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647d8845b5-mtlg7" event={"ID":"eaa076e1-87b1-4118-b601-ba85239d1239","Type":"ContainerDied","Data":"7d0a4e0216f0d87b49a3b7e5a6654da96c6bc6892454e3a906121a364a900e08"} Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.455089 4867 scope.go:117] "RemoveContainer" containerID="0b06e88cc4d2f8b2777aa114152eca62a31a01c35345a95786744159b492aacf" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.497980 4867 scope.go:117] "RemoveContainer" containerID="dbf530f0282d8b762efc12f6f66f07fd30d84690c654b9848c9a7932080aee66" Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.503334 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647d8845b5-mtlg7"] Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.512712 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647d8845b5-mtlg7"] Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.619457 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.741227 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b79d6d5d9-r54bp"] Jan 01 08:47:26 crc kubenswrapper[4867]: W0101 08:47:26.752020 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75ce895d_6831_4af5_9e10_481ce05ec976.slice/crio-ff22583786a24ca906bb95ac067f368cd098fb90fade5ad13377558e08ffe79d WatchSource:0}: Error finding container ff22583786a24ca906bb95ac067f368cd098fb90fade5ad13377558e08ffe79d: Status 404 returned error can't find the container with id ff22583786a24ca906bb95ac067f368cd098fb90fade5ad13377558e08ffe79d Jan 01 08:47:26 crc kubenswrapper[4867]: I0101 08:47:26.940593 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.011528 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:27 crc kubenswrapper[4867]: E0101 08:47:27.105969 4867 secret.go:188] Couldn't get secret openstack/cinder-api-config-data: failed to sync secret cache: timed out waiting for the condition Jan 01 08:47:27 crc kubenswrapper[4867]: E0101 08:47:27.106057 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data-custom podName:dc3d3ca2-7307-41e0-9041-4dd6cacfa63e nodeName:}" failed. No retries permitted until 2026-01-01 08:47:27.606040483 +0000 UTC m=+1256.741309252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data-custom") pod "cinder-api-0" (UID: "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e") : failed to sync secret cache: timed out waiting for the condition Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.170097 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa076e1-87b1-4118-b601-ba85239d1239" path="/var/lib/kubelet/pods/eaa076e1-87b1-4118-b601-ba85239d1239/volumes" Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.404430 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.469588 4867 generic.go:334] "Generic (PLEG): container finished" podID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerID="0ffb3e321bea803090e9084b8864f32d053e1fb66b69554db5ac6b16f37a45db" exitCode=0 Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.469791 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b51add02-d86c-4eb3-924d-1b2ac530e97b","Type":"ContainerDied","Data":"0ffb3e321bea803090e9084b8864f32d053e1fb66b69554db5ac6b16f37a45db"} Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.472842 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d87039c4-162c-48a7-a367-176bf83b674f","Type":"ContainerStarted","Data":"1b4723a4505aa7e68c527d8225bf3e57cefde328bc126debc8c997a2401f3473"} Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.474526 4867 generic.go:334] "Generic (PLEG): container finished" podID="75ce895d-6831-4af5-9e10-481ce05ec976" containerID="63ee244fea7e994dedf00c867ecb5721ef862ee406239ee082533fb853e9f50b" exitCode=0 Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.474802 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" event={"ID":"75ce895d-6831-4af5-9e10-481ce05ec976","Type":"ContainerDied","Data":"63ee244fea7e994dedf00c867ecb5721ef862ee406239ee082533fb853e9f50b"} Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.474871 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" event={"ID":"75ce895d-6831-4af5-9e10-481ce05ec976","Type":"ContainerStarted","Data":"ff22583786a24ca906bb95ac067f368cd098fb90fade5ad13377558e08ffe79d"} Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.636488 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 01 08:47:27 crc kubenswrapper[4867]: E0101 08:47:27.637142 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data-custom], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/cinder-api-0" podUID="dc3d3ca2-7307-41e0-9041-4dd6cacfa63e" Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.655508 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data-custom\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:27 crc kubenswrapper[4867]: I0101 08:47:27.661008 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data-custom\") pod \"cinder-api-0\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " pod="openstack/cinder-api-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.226609 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.271568 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-log-httpd\") pod \"b51add02-d86c-4eb3-924d-1b2ac530e97b\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.271677 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-config-data\") pod \"b51add02-d86c-4eb3-924d-1b2ac530e97b\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.271721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8csl\" (UniqueName: \"kubernetes.io/projected/b51add02-d86c-4eb3-924d-1b2ac530e97b-kube-api-access-r8csl\") pod \"b51add02-d86c-4eb3-924d-1b2ac530e97b\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.271770 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-sg-core-conf-yaml\") pod \"b51add02-d86c-4eb3-924d-1b2ac530e97b\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.271793 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-run-httpd\") pod \"b51add02-d86c-4eb3-924d-1b2ac530e97b\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.271976 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-scripts\") pod \"b51add02-d86c-4eb3-924d-1b2ac530e97b\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.272008 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-combined-ca-bundle\") pod \"b51add02-d86c-4eb3-924d-1b2ac530e97b\" (UID: \"b51add02-d86c-4eb3-924d-1b2ac530e97b\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.272482 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b51add02-d86c-4eb3-924d-1b2ac530e97b" (UID: "b51add02-d86c-4eb3-924d-1b2ac530e97b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.272568 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b51add02-d86c-4eb3-924d-1b2ac530e97b" (UID: "b51add02-d86c-4eb3-924d-1b2ac530e97b"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.294193 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51add02-d86c-4eb3-924d-1b2ac530e97b-kube-api-access-r8csl" (OuterVolumeSpecName: "kube-api-access-r8csl") pod "b51add02-d86c-4eb3-924d-1b2ac530e97b" (UID: "b51add02-d86c-4eb3-924d-1b2ac530e97b"). InnerVolumeSpecName "kube-api-access-r8csl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.301130 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-scripts" (OuterVolumeSpecName: "scripts") pod "b51add02-d86c-4eb3-924d-1b2ac530e97b" (UID: "b51add02-d86c-4eb3-924d-1b2ac530e97b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.377069 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.377179 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.377243 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8csl\" (UniqueName: \"kubernetes.io/projected/b51add02-d86c-4eb3-924d-1b2ac530e97b-kube-api-access-r8csl\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.377308 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51add02-d86c-4eb3-924d-1b2ac530e97b-run-httpd\") on node \"crc\" 
DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.379819 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b51add02-d86c-4eb3-924d-1b2ac530e97b" (UID: "b51add02-d86c-4eb3-924d-1b2ac530e97b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.431669 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b51add02-d86c-4eb3-924d-1b2ac530e97b" (UID: "b51add02-d86c-4eb3-924d-1b2ac530e97b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.460595 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-config-data" (OuterVolumeSpecName: "config-data") pod "b51add02-d86c-4eb3-924d-1b2ac530e97b" (UID: "b51add02-d86c-4eb3-924d-1b2ac530e97b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.481911 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.481937 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.481947 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b51add02-d86c-4eb3-924d-1b2ac530e97b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.508647 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" event={"ID":"75ce895d-6831-4af5-9e10-481ce05ec976","Type":"ContainerStarted","Data":"28e9aa24e604b0ba373ef4c4672912d39dad49eeb4da59f06c89ed951c249b9b"} Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.510125 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.512113 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51add02-d86c-4eb3-924d-1b2ac530e97b","Type":"ContainerDied","Data":"f7ea4af71f99e6d8df6157f41a790842e230c2afeaaaac388b08575584fc52e6"} Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.512169 4867 scope.go:117] "RemoveContainer" containerID="2c9ba5e7e9909a9e42d08d566af0459042e7a0415bf8fbefbf672fdede258d9c" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.512285 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.528048 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.528651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d87039c4-162c-48a7-a367-176bf83b674f","Type":"ContainerStarted","Data":"d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48"} Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.541267 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" podStartSLOduration=3.541241381 podStartE2EDuration="3.541241381s" podCreationTimestamp="2026-01-01 08:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:28.532971859 +0000 UTC m=+1257.668240638" watchObservedRunningTime="2026-01-01 08:47:28.541241381 +0000 UTC m=+1257.676510150" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.593460 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.661674 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.663773 4867 scope.go:117] "RemoveContainer" containerID="cf8d5a63e66d616af8da42126dcb234a35029b480562ae50fceae31231a65b57" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.685044 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.685692 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data-custom\") pod \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.685990 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-combined-ca-bundle\") pod \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.686103 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-scripts\") pod \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.686190 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-logs\") pod \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.686237 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwv8f\" (UniqueName: \"kubernetes.io/projected/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-kube-api-access-vwv8f\") pod \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.686256 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data\") pod \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.686286 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-etc-machine-id\") pod \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\" (UID: \"dc3d3ca2-7307-41e0-9041-4dd6cacfa63e\") " Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.686695 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e" (UID: "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.687511 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-logs" (OuterVolumeSpecName: "logs") pod "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e" (UID: "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.693472 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-kube-api-access-vwv8f" (OuterVolumeSpecName: "kube-api-access-vwv8f") pod "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e" (UID: "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e"). InnerVolumeSpecName "kube-api-access-vwv8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.700031 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e" (UID: "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.700103 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.713540 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-scripts" (OuterVolumeSpecName: "scripts") pod "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e" (UID: "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: E0101 08:47:28.715733 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="sg-core" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.715757 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="sg-core" Jan 01 08:47:28 crc kubenswrapper[4867]: E0101 08:47:28.715858 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="proxy-httpd" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.715872 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="proxy-httpd" Jan 01 08:47:28 crc kubenswrapper[4867]: E0101 08:47:28.715897 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="ceilometer-notification-agent" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.715906 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="ceilometer-notification-agent" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.719158 4867 scope.go:117] "RemoveContainer" containerID="0ffb3e321bea803090e9084b8864f32d053e1fb66b69554db5ac6b16f37a45db" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.726248 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="proxy-httpd" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.726327 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="ceilometer-notification-agent" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.726339 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" containerName="sg-core" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.738875 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.745458 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.745763 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.750250 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e" (UID: "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.761407 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data" (OuterVolumeSpecName: "config-data") pod "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e" (UID: "dc3d3ca2-7307-41e0-9041-4dd6cacfa63e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.779425 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.787915 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788089 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-log-httpd\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788122 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-config-data\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788180 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788256 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-scripts\") pod \"ceilometer-0\" (UID: 
\"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788279 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-run-httpd\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788305 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swctj\" (UniqueName: \"kubernetes.io/projected/15e2c63d-ba70-446b-881f-a0a66f440016-kube-api-access-swctj\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788382 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788426 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwv8f\" (UniqueName: \"kubernetes.io/projected/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-kube-api-access-vwv8f\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788437 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788446 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788454 4867 reconciler_common.go:293] "Volume detached 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788463 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.788472 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.890816 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.890911 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-scripts\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.890933 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-run-httpd\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.890963 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swctj\" (UniqueName: 
\"kubernetes.io/projected/15e2c63d-ba70-446b-881f-a0a66f440016-kube-api-access-swctj\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.891003 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.891038 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-log-httpd\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.891055 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-config-data\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.891600 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-run-httpd\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.891612 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-log-httpd\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.895049 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.899809 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-config-data\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.903618 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.903730 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-scripts\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:28 crc kubenswrapper[4867]: I0101 08:47:28.917297 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swctj\" (UniqueName: \"kubernetes.io/projected/15e2c63d-ba70-446b-881f-a0a66f440016-kube-api-access-swctj\") pod \"ceilometer-0\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " pod="openstack/ceilometer-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.107684 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.142110 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51add02-d86c-4eb3-924d-1b2ac530e97b" path="/var/lib/kubelet/pods/b51add02-d86c-4eb3-924d-1b2ac530e97b/volumes" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.170626 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.181369 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.275278 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7998fdfbd-7j4fm"] Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.275927 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7998fdfbd-7j4fm" podUID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerName="barbican-api-log" containerID="cri-o://8620038bc7a216bcaa2c84ee2c6daf70d342571e9e35270dd64fc523d1d2ed14" gracePeriod=30 Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.276053 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7998fdfbd-7j4fm" podUID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerName="barbican-api" containerID="cri-o://f0faa4e5c265cb6fd5c37a90e5f1fd1d09f968338daf05e4b9a3007515c35263" gracePeriod=30 Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.545476 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d87039c4-162c-48a7-a367-176bf83b674f","Type":"ContainerStarted","Data":"04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e"} Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.550856 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerID="8620038bc7a216bcaa2c84ee2c6daf70d342571e9e35270dd64fc523d1d2ed14" exitCode=143 Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.550924 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7998fdfbd-7j4fm" event={"ID":"7970bf01-94d4-4ceb-9289-f6e4f7a00f86","Type":"ContainerDied","Data":"8620038bc7a216bcaa2c84ee2c6daf70d342571e9e35270dd64fc523d1d2ed14"} Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.551007 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.565250 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.680494026 podStartE2EDuration="4.565233254s" podCreationTimestamp="2026-01-01 08:47:25 +0000 UTC" firstStartedPulling="2026-01-01 08:47:26.613034344 +0000 UTC m=+1255.748303113" lastFinishedPulling="2026-01-01 08:47:27.497773572 +0000 UTC m=+1256.633042341" observedRunningTime="2026-01-01 08:47:29.561073008 +0000 UTC m=+1258.696341797" watchObservedRunningTime="2026-01-01 08:47:29.565233254 +0000 UTC m=+1258.700502023" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.638130 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.654267 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.664055 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.672944 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.674831 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.677940 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.678141 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.678434 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.678679 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.719546 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.719620 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.719709 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.719751 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.719774 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-scripts\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.719827 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff82f43d-33bd-47f0-9864-83bb3048f9b2-logs\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.719850 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmcs8\" (UniqueName: \"kubernetes.io/projected/ff82f43d-33bd-47f0-9864-83bb3048f9b2-kube-api-access-rmcs8\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.719921 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff82f43d-33bd-47f0-9864-83bb3048f9b2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.719959 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.821370 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.821695 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.821767 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.821812 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.821848 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-scripts\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.821915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff82f43d-33bd-47f0-9864-83bb3048f9b2-logs\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.821943 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmcs8\" (UniqueName: \"kubernetes.io/projected/ff82f43d-33bd-47f0-9864-83bb3048f9b2-kube-api-access-rmcs8\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.821997 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff82f43d-33bd-47f0-9864-83bb3048f9b2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.822040 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data-custom\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.822373 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff82f43d-33bd-47f0-9864-83bb3048f9b2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.823011 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff82f43d-33bd-47f0-9864-83bb3048f9b2-logs\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc 
kubenswrapper[4867]: I0101 08:47:29.828541 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.829208 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.829380 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.830025 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data-custom\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.837140 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.839308 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-scripts\") pod \"cinder-api-0\" (UID: 
\"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:29 crc kubenswrapper[4867]: I0101 08:47:29.843656 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmcs8\" (UniqueName: \"kubernetes.io/projected/ff82f43d-33bd-47f0-9864-83bb3048f9b2-kube-api-access-rmcs8\") pod \"cinder-api-0\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " pod="openstack/cinder-api-0" Jan 01 08:47:30 crc kubenswrapper[4867]: I0101 08:47:30.013600 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 01 08:47:30 crc kubenswrapper[4867]: I0101 08:47:30.568076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e2c63d-ba70-446b-881f-a0a66f440016","Type":"ContainerStarted","Data":"8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b"} Jan 01 08:47:30 crc kubenswrapper[4867]: I0101 08:47:30.568374 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e2c63d-ba70-446b-881f-a0a66f440016","Type":"ContainerStarted","Data":"8af3531642ab8ac0e16211ac784e9dfffead601b9d60131ee77b6e40d76240b3"} Jan 01 08:47:30 crc kubenswrapper[4867]: I0101 08:47:30.573493 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 01 08:47:30 crc kubenswrapper[4867]: W0101 08:47:30.576402 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff82f43d_33bd_47f0_9864_83bb3048f9b2.slice/crio-6c9faa3db976983e95098561d1c91bd4834cacbb33c6fce0bc3a0ed585a19543 WatchSource:0}: Error finding container 6c9faa3db976983e95098561d1c91bd4834cacbb33c6fce0bc3a0ed585a19543: Status 404 returned error can't find the container with id 6c9faa3db976983e95098561d1c91bd4834cacbb33c6fce0bc3a0ed585a19543 Jan 01 08:47:31 crc kubenswrapper[4867]: I0101 08:47:31.074551 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 01 08:47:31 crc kubenswrapper[4867]: I0101 08:47:31.150948 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3d3ca2-7307-41e0-9041-4dd6cacfa63e" path="/var/lib/kubelet/pods/dc3d3ca2-7307-41e0-9041-4dd6cacfa63e/volumes" Jan 01 08:47:31 crc kubenswrapper[4867]: I0101 08:47:31.582651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e2c63d-ba70-446b-881f-a0a66f440016","Type":"ContainerStarted","Data":"cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974"} Jan 01 08:47:31 crc kubenswrapper[4867]: I0101 08:47:31.586783 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ff82f43d-33bd-47f0-9864-83bb3048f9b2","Type":"ContainerStarted","Data":"faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489"} Jan 01 08:47:31 crc kubenswrapper[4867]: I0101 08:47:31.586843 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ff82f43d-33bd-47f0-9864-83bb3048f9b2","Type":"ContainerStarted","Data":"6c9faa3db976983e95098561d1c91bd4834cacbb33c6fce0bc3a0ed585a19543"} Jan 01 08:47:32 crc kubenswrapper[4867]: I0101 08:47:32.456510 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7998fdfbd-7j4fm" podUID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:58980->10.217.0.160:9311: read: connection reset by peer" Jan 01 08:47:32 crc kubenswrapper[4867]: I0101 08:47:32.457784 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7998fdfbd-7j4fm" podUID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:58974->10.217.0.160:9311: read: connection reset 
by peer" Jan 01 08:47:32 crc kubenswrapper[4867]: I0101 08:47:32.602505 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ff82f43d-33bd-47f0-9864-83bb3048f9b2","Type":"ContainerStarted","Data":"c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb"} Jan 01 08:47:32 crc kubenswrapper[4867]: I0101 08:47:32.602803 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 01 08:47:32 crc kubenswrapper[4867]: I0101 08:47:32.607539 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e2c63d-ba70-446b-881f-a0a66f440016","Type":"ContainerStarted","Data":"cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee"} Jan 01 08:47:32 crc kubenswrapper[4867]: I0101 08:47:32.611356 4867 generic.go:334] "Generic (PLEG): container finished" podID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerID="f0faa4e5c265cb6fd5c37a90e5f1fd1d09f968338daf05e4b9a3007515c35263" exitCode=0 Jan 01 08:47:32 crc kubenswrapper[4867]: I0101 08:47:32.611397 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7998fdfbd-7j4fm" event={"ID":"7970bf01-94d4-4ceb-9289-f6e4f7a00f86","Type":"ContainerDied","Data":"f0faa4e5c265cb6fd5c37a90e5f1fd1d09f968338daf05e4b9a3007515c35263"} Jan 01 08:47:32 crc kubenswrapper[4867]: I0101 08:47:32.640688 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.640662968 podStartE2EDuration="3.640662968s" podCreationTimestamp="2026-01-01 08:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:32.627219382 +0000 UTC m=+1261.762488211" watchObservedRunningTime="2026-01-01 08:47:32.640662968 +0000 UTC m=+1261.775931777" Jan 01 08:47:32 crc kubenswrapper[4867]: I0101 08:47:32.964815 4867 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.103330 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-logs\") pod \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.103408 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-combined-ca-bundle\") pod \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.103531 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqbnw\" (UniqueName: \"kubernetes.io/projected/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-kube-api-access-pqbnw\") pod \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.103748 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data-custom\") pod \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.103915 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data\") pod \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\" (UID: \"7970bf01-94d4-4ceb-9289-f6e4f7a00f86\") " Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.106098 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-logs" (OuterVolumeSpecName: "logs") pod "7970bf01-94d4-4ceb-9289-f6e4f7a00f86" (UID: "7970bf01-94d4-4ceb-9289-f6e4f7a00f86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.110722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-kube-api-access-pqbnw" (OuterVolumeSpecName: "kube-api-access-pqbnw") pod "7970bf01-94d4-4ceb-9289-f6e4f7a00f86" (UID: "7970bf01-94d4-4ceb-9289-f6e4f7a00f86"). InnerVolumeSpecName "kube-api-access-pqbnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.118531 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7970bf01-94d4-4ceb-9289-f6e4f7a00f86" (UID: "7970bf01-94d4-4ceb-9289-f6e4f7a00f86"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.160355 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7970bf01-94d4-4ceb-9289-f6e4f7a00f86" (UID: "7970bf01-94d4-4ceb-9289-f6e4f7a00f86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.180652 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data" (OuterVolumeSpecName: "config-data") pod "7970bf01-94d4-4ceb-9289-f6e4f7a00f86" (UID: "7970bf01-94d4-4ceb-9289-f6e4f7a00f86"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.211272 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.211320 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.211333 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.211345 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.211356 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqbnw\" (UniqueName: \"kubernetes.io/projected/7970bf01-94d4-4ceb-9289-f6e4f7a00f86-kube-api-access-pqbnw\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.629547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7998fdfbd-7j4fm" event={"ID":"7970bf01-94d4-4ceb-9289-f6e4f7a00f86","Type":"ContainerDied","Data":"7e54efc172f8be24d7b83a42610ea91a2ecd55045c360979da0d30c809b0ec2f"} Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.629829 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7998fdfbd-7j4fm" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.631363 4867 scope.go:117] "RemoveContainer" containerID="f0faa4e5c265cb6fd5c37a90e5f1fd1d09f968338daf05e4b9a3007515c35263" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.635541 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e2c63d-ba70-446b-881f-a0a66f440016","Type":"ContainerStarted","Data":"42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f"} Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.635688 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.670468 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9588663880000001 podStartE2EDuration="5.670446614s" podCreationTimestamp="2026-01-01 08:47:28 +0000 UTC" firstStartedPulling="2026-01-01 08:47:29.664907058 +0000 UTC m=+1258.800175817" lastFinishedPulling="2026-01-01 08:47:33.376487274 +0000 UTC m=+1262.511756043" observedRunningTime="2026-01-01 08:47:33.659700502 +0000 UTC m=+1262.794969311" watchObservedRunningTime="2026-01-01 08:47:33.670446614 +0000 UTC m=+1262.805715403" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.671712 4867 scope.go:117] "RemoveContainer" containerID="8620038bc7a216bcaa2c84ee2c6daf70d342571e9e35270dd64fc523d1d2ed14" Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.686662 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7998fdfbd-7j4fm"] Jan 01 08:47:33 crc kubenswrapper[4867]: I0101 08:47:33.693696 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7998fdfbd-7j4fm"] Jan 01 08:47:35 crc kubenswrapper[4867]: I0101 08:47:35.140145 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" path="/var/lib/kubelet/pods/7970bf01-94d4-4ceb-9289-f6e4f7a00f86/volumes" Jan 01 08:47:35 crc kubenswrapper[4867]: I0101 08:47:35.535996 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.089026 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.187660 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b4c6f4469-xj4b9"] Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.188482 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" podUID="79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" containerName="dnsmasq-dns" containerID="cri-o://3ff9506c60682e3449b67de6a85c0c9e60bafe26d40e57a74aeb41a25472b5b4" gracePeriod=10 Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.355066 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.392386 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.660927 4867 generic.go:334] "Generic (PLEG): container finished" podID="79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" containerID="3ff9506c60682e3449b67de6a85c0c9e60bafe26d40e57a74aeb41a25472b5b4" exitCode=0 Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.661143 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" event={"ID":"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa","Type":"ContainerDied","Data":"3ff9506c60682e3449b67de6a85c0c9e60bafe26d40e57a74aeb41a25472b5b4"} Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.661279 4867 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" event={"ID":"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa","Type":"ContainerDied","Data":"d807638e9520261dde0834d8a7855aeac43a8567e9921c38259f4691a0cc6700"} Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.661351 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d807638e9520261dde0834d8a7855aeac43a8567e9921c38259f4691a0cc6700" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.661156 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d87039c4-162c-48a7-a367-176bf83b674f" containerName="cinder-scheduler" containerID="cri-o://d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48" gracePeriod=30 Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.661252 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d87039c4-162c-48a7-a367-176bf83b674f" containerName="probe" containerID="cri-o://04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e" gracePeriod=30 Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.698898 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.811618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-swift-storage-0\") pod \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.811807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-config\") pod \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.811841 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-nb\") pod \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.811930 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfvs2\" (UniqueName: \"kubernetes.io/projected/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-kube-api-access-rfvs2\") pod \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.811955 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-sb\") pod \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.812015 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-svc\") pod \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\" (UID: \"79f8db2c-a9dd-4a94-a0fc-c24d551c2baa\") " Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.831077 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-kube-api-access-rfvs2" (OuterVolumeSpecName: "kube-api-access-rfvs2") pod "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" (UID: "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa"). InnerVolumeSpecName "kube-api-access-rfvs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.874144 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" (UID: "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.876214 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-config" (OuterVolumeSpecName: "config") pod "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" (UID: "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.879768 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" (UID: "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.886636 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" (UID: "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.893315 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" (UID: "79f8db2c-a9dd-4a94-a0fc-c24d551c2baa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.913673 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.913706 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.913716 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.913727 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:36 crc 
kubenswrapper[4867]: I0101 08:47:36.913736 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfvs2\" (UniqueName: \"kubernetes.io/projected/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-kube-api-access-rfvs2\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:36 crc kubenswrapper[4867]: I0101 08:47:36.913743 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:37 crc kubenswrapper[4867]: I0101 08:47:37.679118 4867 generic.go:334] "Generic (PLEG): container finished" podID="d87039c4-162c-48a7-a367-176bf83b674f" containerID="04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e" exitCode=0 Jan 01 08:47:37 crc kubenswrapper[4867]: I0101 08:47:37.679172 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d87039c4-162c-48a7-a367-176bf83b674f","Type":"ContainerDied","Data":"04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e"} Jan 01 08:47:37 crc kubenswrapper[4867]: I0101 08:47:37.679248 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b4c6f4469-xj4b9" Jan 01 08:47:37 crc kubenswrapper[4867]: I0101 08:47:37.718465 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b4c6f4469-xj4b9"] Jan 01 08:47:37 crc kubenswrapper[4867]: I0101 08:47:37.731774 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b4c6f4469-xj4b9"] Jan 01 08:47:38 crc kubenswrapper[4867]: I0101 08:47:38.636849 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:47:38 crc kubenswrapper[4867]: I0101 08:47:38.724857 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bccf6db66-lbtdw"] Jan 01 08:47:38 crc kubenswrapper[4867]: I0101 08:47:38.725473 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bccf6db66-lbtdw" podUID="96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" containerName="neutron-api" containerID="cri-o://8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e" gracePeriod=30 Jan 01 08:47:38 crc kubenswrapper[4867]: I0101 08:47:38.725983 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bccf6db66-lbtdw" podUID="96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" containerName="neutron-httpd" containerID="cri-o://84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9" gracePeriod=30 Jan 01 08:47:39 crc kubenswrapper[4867]: I0101 08:47:39.142872 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" path="/var/lib/kubelet/pods/79f8db2c-a9dd-4a94-a0fc-c24d551c2baa/volumes" Jan 01 08:47:39 crc kubenswrapper[4867]: I0101 08:47:39.699452 4867 generic.go:334] "Generic (PLEG): container finished" podID="96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" containerID="84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9" exitCode=0 Jan 01 08:47:39 crc kubenswrapper[4867]: 
I0101 08:47:39.699495 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bccf6db66-lbtdw" event={"ID":"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4","Type":"ContainerDied","Data":"84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9"} Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.121795 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.284591 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data-custom\") pod \"d87039c4-162c-48a7-a367-176bf83b674f\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.284689 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d87039c4-162c-48a7-a367-176bf83b674f-etc-machine-id\") pod \"d87039c4-162c-48a7-a367-176bf83b674f\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.284763 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7gxf\" (UniqueName: \"kubernetes.io/projected/d87039c4-162c-48a7-a367-176bf83b674f-kube-api-access-t7gxf\") pod \"d87039c4-162c-48a7-a367-176bf83b674f\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.284798 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data\") pod \"d87039c4-162c-48a7-a367-176bf83b674f\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.284830 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-combined-ca-bundle\") pod \"d87039c4-162c-48a7-a367-176bf83b674f\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.284855 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-scripts\") pod \"d87039c4-162c-48a7-a367-176bf83b674f\" (UID: \"d87039c4-162c-48a7-a367-176bf83b674f\") " Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.285957 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d87039c4-162c-48a7-a367-176bf83b674f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d87039c4-162c-48a7-a367-176bf83b674f" (UID: "d87039c4-162c-48a7-a367-176bf83b674f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.291788 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-scripts" (OuterVolumeSpecName: "scripts") pod "d87039c4-162c-48a7-a367-176bf83b674f" (UID: "d87039c4-162c-48a7-a367-176bf83b674f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.302610 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87039c4-162c-48a7-a367-176bf83b674f-kube-api-access-t7gxf" (OuterVolumeSpecName: "kube-api-access-t7gxf") pod "d87039c4-162c-48a7-a367-176bf83b674f" (UID: "d87039c4-162c-48a7-a367-176bf83b674f"). InnerVolumeSpecName "kube-api-access-t7gxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.311016 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d87039c4-162c-48a7-a367-176bf83b674f" (UID: "d87039c4-162c-48a7-a367-176bf83b674f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.344417 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d87039c4-162c-48a7-a367-176bf83b674f" (UID: "d87039c4-162c-48a7-a367-176bf83b674f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.386683 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.386710 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.386721 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d87039c4-162c-48a7-a367-176bf83b674f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.386730 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7gxf\" (UniqueName: \"kubernetes.io/projected/d87039c4-162c-48a7-a367-176bf83b674f-kube-api-access-t7gxf\") on 
node \"crc\" DevicePath \"\"" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.386740 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.389368 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data" (OuterVolumeSpecName: "config-data") pod "d87039c4-162c-48a7-a367-176bf83b674f" (UID: "d87039c4-162c-48a7-a367-176bf83b674f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.488606 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87039c4-162c-48a7-a367-176bf83b674f-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.710458 4867 generic.go:334] "Generic (PLEG): container finished" podID="d87039c4-162c-48a7-a367-176bf83b674f" containerID="d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48" exitCode=0 Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.710498 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d87039c4-162c-48a7-a367-176bf83b674f","Type":"ContainerDied","Data":"d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48"} Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.710601 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.710665 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d87039c4-162c-48a7-a367-176bf83b674f","Type":"ContainerDied","Data":"1b4723a4505aa7e68c527d8225bf3e57cefde328bc126debc8c997a2401f3473"} Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.710689 4867 scope.go:117] "RemoveContainer" containerID="04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.737315 4867 scope.go:117] "RemoveContainer" containerID="d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.767285 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.785672 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.798639 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:47:40 crc kubenswrapper[4867]: E0101 08:47:40.799098 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerName="barbican-api-log" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.799117 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerName="barbican-api-log" Jan 01 08:47:40 crc kubenswrapper[4867]: E0101 08:47:40.799133 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87039c4-162c-48a7-a367-176bf83b674f" containerName="probe" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.799140 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87039c4-162c-48a7-a367-176bf83b674f" containerName="probe" Jan 01 08:47:40 crc kubenswrapper[4867]: 
E0101 08:47:40.799158 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87039c4-162c-48a7-a367-176bf83b674f" containerName="cinder-scheduler" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.799164 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87039c4-162c-48a7-a367-176bf83b674f" containerName="cinder-scheduler" Jan 01 08:47:40 crc kubenswrapper[4867]: E0101 08:47:40.799182 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerName="barbican-api" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.799187 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerName="barbican-api" Jan 01 08:47:40 crc kubenswrapper[4867]: E0101 08:47:40.799197 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" containerName="dnsmasq-dns" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.799204 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" containerName="dnsmasq-dns" Jan 01 08:47:40 crc kubenswrapper[4867]: E0101 08:47:40.799217 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" containerName="init" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.799223 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" containerName="init" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.799398 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerName="barbican-api" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.799416 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f8db2c-a9dd-4a94-a0fc-c24d551c2baa" containerName="dnsmasq-dns" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.799430 4867 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7970bf01-94d4-4ceb-9289-f6e4f7a00f86" containerName="barbican-api-log" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.799442 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87039c4-162c-48a7-a367-176bf83b674f" containerName="probe" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.799453 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87039c4-162c-48a7-a367-176bf83b674f" containerName="cinder-scheduler" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.800396 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.802479 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.810406 4867 scope.go:117] "RemoveContainer" containerID="04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e" Jan 01 08:47:40 crc kubenswrapper[4867]: E0101 08:47:40.810938 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e\": container with ID starting with 04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e not found: ID does not exist" containerID="04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.810986 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e"} err="failed to get container status \"04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e\": rpc error: code = NotFound desc = could not find container \"04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e\": container 
with ID starting with 04323ba633c2a2b2eb2b7f37c7da882f6948f4f760470abbef21d72e6eb06f2e not found: ID does not exist" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.811006 4867 scope.go:117] "RemoveContainer" containerID="d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.812397 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:47:40 crc kubenswrapper[4867]: E0101 08:47:40.812929 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48\": container with ID starting with d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48 not found: ID does not exist" containerID="d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.812955 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48"} err="failed to get container status \"d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48\": rpc error: code = NotFound desc = could not find container \"d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48\": container with ID starting with d5ce34e933852f0ec86e4268745f1b37011e75cab812300e5117d0490a47cb48 not found: ID does not exist" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.895167 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.895219 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgqjm\" (UniqueName: \"kubernetes.io/projected/3205b065-c067-4035-8afb-e2bbcc7d8a42-kube-api-access-xgqjm\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.895399 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3205b065-c067-4035-8afb-e2bbcc7d8a42-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.895502 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-scripts\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.895545 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.895584 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.996649 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xgqjm\" (UniqueName: \"kubernetes.io/projected/3205b065-c067-4035-8afb-e2bbcc7d8a42-kube-api-access-xgqjm\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.997072 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3205b065-c067-4035-8afb-e2bbcc7d8a42-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.997101 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-scripts\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.997132 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.997174 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.997209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:40 crc kubenswrapper[4867]: I0101 08:47:40.997128 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3205b065-c067-4035-8afb-e2bbcc7d8a42-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:41 crc kubenswrapper[4867]: I0101 08:47:41.002117 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:41 crc kubenswrapper[4867]: I0101 08:47:41.002514 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-scripts\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:41 crc kubenswrapper[4867]: I0101 08:47:41.004233 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:41 crc kubenswrapper[4867]: I0101 08:47:41.011239 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:41 crc kubenswrapper[4867]: I0101 08:47:41.015284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-xgqjm\" (UniqueName: \"kubernetes.io/projected/3205b065-c067-4035-8afb-e2bbcc7d8a42-kube-api-access-xgqjm\") pod \"cinder-scheduler-0\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " pod="openstack/cinder-scheduler-0" Jan 01 08:47:41 crc kubenswrapper[4867]: I0101 08:47:41.123503 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 01 08:47:41 crc kubenswrapper[4867]: I0101 08:47:41.154231 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d87039c4-162c-48a7-a367-176bf83b674f" path="/var/lib/kubelet/pods/d87039c4-162c-48a7-a367-176bf83b674f/volumes" Jan 01 08:47:41 crc kubenswrapper[4867]: I0101 08:47:41.575668 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:47:41 crc kubenswrapper[4867]: I0101 08:47:41.708507 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 01 08:47:41 crc kubenswrapper[4867]: I0101 08:47:41.744566 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3205b065-c067-4035-8afb-e2bbcc7d8a42","Type":"ContainerStarted","Data":"e4046b3e49161a2c6897ac9a886e9820415272b0c4b53c73b3dd45eff1499813"} Jan 01 08:47:42 crc kubenswrapper[4867]: I0101 08:47:42.768974 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3205b065-c067-4035-8afb-e2bbcc7d8a42","Type":"ContainerStarted","Data":"eb7dcef39a55694c9e76f1f5778b1c287c9ba1f1a1711c0d8fbaaad900a62405"} Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.002235 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.529662 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.660258 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-config\") pod \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.660364 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-combined-ca-bundle\") pod \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.660417 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpt69\" (UniqueName: \"kubernetes.io/projected/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-kube-api-access-kpt69\") pod \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.660528 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-ovndb-tls-certs\") pod \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.660570 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-httpd-config\") pod \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\" (UID: \"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4\") " Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.669380 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-kube-api-access-kpt69" (OuterVolumeSpecName: "kube-api-access-kpt69") pod "96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" (UID: "96b7e6f9-7a1c-4f53-8317-f11e46a64ee4"). InnerVolumeSpecName "kube-api-access-kpt69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.681123 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" (UID: "96b7e6f9-7a1c-4f53-8317-f11e46a64ee4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.709689 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" (UID: "96b7e6f9-7a1c-4f53-8317-f11e46a64ee4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.737783 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-config" (OuterVolumeSpecName: "config") pod "96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" (UID: "96b7e6f9-7a1c-4f53-8317-f11e46a64ee4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.762810 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.763141 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.763497 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpt69\" (UniqueName: \"kubernetes.io/projected/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-kube-api-access-kpt69\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.763585 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.763861 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" (UID: "96b7e6f9-7a1c-4f53-8317-f11e46a64ee4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.789601 4867 generic.go:334] "Generic (PLEG): container finished" podID="96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" containerID="8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e" exitCode=0 Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.789694 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bccf6db66-lbtdw" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.790723 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bccf6db66-lbtdw" event={"ID":"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4","Type":"ContainerDied","Data":"8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e"} Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.790859 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bccf6db66-lbtdw" event={"ID":"96b7e6f9-7a1c-4f53-8317-f11e46a64ee4","Type":"ContainerDied","Data":"c570023f8844b34978cd6e84f1a9270eb4eac37d778cd320e11ea3caa4df3bad"} Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.790920 4867 scope.go:117] "RemoveContainer" containerID="84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.791494 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3205b065-c067-4035-8afb-e2bbcc7d8a42","Type":"ContainerStarted","Data":"2308efd8efc29d35e443b922f20dee961e0822be16a9b0b3be84cb600b8719cd"} Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.822029 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.822010363 podStartE2EDuration="3.822010363s" podCreationTimestamp="2026-01-01 08:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:47:43.815709716 +0000 UTC m=+1272.950978495" watchObservedRunningTime="2026-01-01 08:47:43.822010363 +0000 UTC m=+1272.957279132" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.837574 4867 scope.go:117] "RemoveContainer" containerID="8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.839389 4867 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/neutron-6bccf6db66-lbtdw"] Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.840287 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.859431 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6bccf6db66-lbtdw"] Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.863111 4867 scope.go:117] "RemoveContainer" containerID="84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9" Jan 01 08:47:43 crc kubenswrapper[4867]: E0101 08:47:43.867010 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9\": container with ID starting with 84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9 not found: ID does not exist" containerID="84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.867070 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9"} err="failed to get container status \"84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9\": rpc error: code = NotFound desc = could not find container \"84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9\": container with ID starting with 84372a0b96d302680aac52645226307b41fb5dee0164942592b278b11e466ac9 not found: ID does not exist" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.867110 4867 scope.go:117] "RemoveContainer" containerID="8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e" Jan 01 08:47:43 crc kubenswrapper[4867]: E0101 08:47:43.867604 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e\": container with ID starting with 8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e not found: ID does not exist" containerID="8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.867635 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e"} err="failed to get container status \"8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e\": rpc error: code = NotFound desc = could not find container \"8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e\": container with ID starting with 8c31e833d9d875c10ee9ea92d077898d1c1a4d514dd0e1b6017de6ae661ea19e not found: ID does not exist" Jan 01 08:47:43 crc kubenswrapper[4867]: I0101 08:47:43.868736 4867 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:44 crc kubenswrapper[4867]: I0101 08:47:44.014127 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:47:45 crc kubenswrapper[4867]: I0101 08:47:45.143111 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" path="/var/lib/kubelet/pods/96b7e6f9-7a1c-4f53-8317-f11e46a64ee4/volumes" Jan 01 08:47:46 crc kubenswrapper[4867]: I0101 08:47:46.124836 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.827531 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 01 08:47:47 crc kubenswrapper[4867]: E0101 08:47:47.829513 4867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" containerName="neutron-httpd" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.829647 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" containerName="neutron-httpd" Jan 01 08:47:47 crc kubenswrapper[4867]: E0101 08:47:47.829774 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" containerName="neutron-api" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.829916 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" containerName="neutron-api" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.830273 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" containerName="neutron-httpd" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.830398 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b7e6f9-7a1c-4f53-8317-f11e46a64ee4" containerName="neutron-api" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.831224 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.833467 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.834479 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.834946 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-2ms4x" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.839706 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.944076 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.944341 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm95k\" (UniqueName: \"kubernetes.io/projected/289cf8b3-9453-42e5-a272-319197158dc3-kube-api-access-wm95k\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.944434 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config-secret\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:47 crc kubenswrapper[4867]: I0101 08:47:47.944624 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.045991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.046046 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm95k\" (UniqueName: \"kubernetes.io/projected/289cf8b3-9453-42e5-a272-319197158dc3-kube-api-access-wm95k\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.046079 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config-secret\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.046189 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.047277 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.054297 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.060431 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config-secret\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.082212 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm95k\" (UniqueName: \"kubernetes.io/projected/289cf8b3-9453-42e5-a272-319197158dc3-kube-api-access-wm95k\") pod \"openstackclient\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.150872 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.197120 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.207836 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.235522 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.237001 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.255085 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 01 08:47:48 crc kubenswrapper[4867]: E0101 08:47:48.310541 4867 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 01 08:47:48 crc kubenswrapper[4867]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_289cf8b3-9453-42e5-a272-319197158dc3_0(f5b086a6bedbf6a7df0d5578553d41672ea667d93bfc23b16b05bdab0352c662): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f5b086a6bedbf6a7df0d5578553d41672ea667d93bfc23b16b05bdab0352c662" Netns:"/var/run/netns/9e9e1fb8-0bc2-42ce-804f-419e57a8045c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f5b086a6bedbf6a7df0d5578553d41672ea667d93bfc23b16b05bdab0352c662;K8S_POD_UID=289cf8b3-9453-42e5-a272-319197158dc3" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/289cf8b3-9453-42e5-a272-319197158dc3]: expected pod UID "289cf8b3-9453-42e5-a272-319197158dc3" but got 
"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" from Kube API Jan 01 08:47:48 crc kubenswrapper[4867]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 01 08:47:48 crc kubenswrapper[4867]: > Jan 01 08:47:48 crc kubenswrapper[4867]: E0101 08:47:48.310962 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 01 08:47:48 crc kubenswrapper[4867]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_289cf8b3-9453-42e5-a272-319197158dc3_0(f5b086a6bedbf6a7df0d5578553d41672ea667d93bfc23b16b05bdab0352c662): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f5b086a6bedbf6a7df0d5578553d41672ea667d93bfc23b16b05bdab0352c662" Netns:"/var/run/netns/9e9e1fb8-0bc2-42ce-804f-419e57a8045c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f5b086a6bedbf6a7df0d5578553d41672ea667d93bfc23b16b05bdab0352c662;K8S_POD_UID=289cf8b3-9453-42e5-a272-319197158dc3" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/289cf8b3-9453-42e5-a272-319197158dc3]: expected pod UID "289cf8b3-9453-42e5-a272-319197158dc3" but got "bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" from Kube API Jan 01 08:47:48 crc kubenswrapper[4867]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 01 08:47:48 crc kubenswrapper[4867]: > pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.350916 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.350996 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.351060 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzwkb\" (UniqueName: \"kubernetes.io/projected/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-kube-api-access-qzwkb\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.351090 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config-secret\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " 
pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.452780 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.452851 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.452934 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzwkb\" (UniqueName: \"kubernetes.io/projected/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-kube-api-access-qzwkb\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.452964 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config-secret\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.453719 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.456762 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.457210 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config-secret\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.472058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzwkb\" (UniqueName: \"kubernetes.io/projected/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-kube-api-access-qzwkb\") pod \"openstackclient\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.594153 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.833823 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.837774 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="289cf8b3-9453-42e5-a272-319197158dc3" podUID="bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.843846 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.963479 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config\") pod \"289cf8b3-9453-42e5-a272-319197158dc3\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.963625 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config-secret\") pod \"289cf8b3-9453-42e5-a272-319197158dc3\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.963676 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-combined-ca-bundle\") pod \"289cf8b3-9453-42e5-a272-319197158dc3\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.963710 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm95k\" (UniqueName: \"kubernetes.io/projected/289cf8b3-9453-42e5-a272-319197158dc3-kube-api-access-wm95k\") pod \"289cf8b3-9453-42e5-a272-319197158dc3\" (UID: \"289cf8b3-9453-42e5-a272-319197158dc3\") " Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.964151 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "289cf8b3-9453-42e5-a272-319197158dc3" (UID: "289cf8b3-9453-42e5-a272-319197158dc3"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.969957 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "289cf8b3-9453-42e5-a272-319197158dc3" (UID: "289cf8b3-9453-42e5-a272-319197158dc3"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.970181 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "289cf8b3-9453-42e5-a272-319197158dc3" (UID: "289cf8b3-9453-42e5-a272-319197158dc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:48 crc kubenswrapper[4867]: I0101 08:47:48.973100 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289cf8b3-9453-42e5-a272-319197158dc3-kube-api-access-wm95k" (OuterVolumeSpecName: "kube-api-access-wm95k") pod "289cf8b3-9453-42e5-a272-319197158dc3" (UID: "289cf8b3-9453-42e5-a272-319197158dc3"). InnerVolumeSpecName "kube-api-access-wm95k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:49 crc kubenswrapper[4867]: I0101 08:47:49.066202 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:49 crc kubenswrapper[4867]: I0101 08:47:49.066241 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm95k\" (UniqueName: \"kubernetes.io/projected/289cf8b3-9453-42e5-a272-319197158dc3-kube-api-access-wm95k\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:49 crc kubenswrapper[4867]: I0101 08:47:49.066256 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:49 crc kubenswrapper[4867]: I0101 08:47:49.066269 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/289cf8b3-9453-42e5-a272-319197158dc3-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:49 crc kubenswrapper[4867]: I0101 08:47:49.107587 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 01 08:47:49 crc kubenswrapper[4867]: I0101 08:47:49.139427 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289cf8b3-9453-42e5-a272-319197158dc3" path="/var/lib/kubelet/pods/289cf8b3-9453-42e5-a272-319197158dc3/volumes" Jan 01 08:47:49 crc kubenswrapper[4867]: I0101 08:47:49.843310 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 01 08:47:49 crc kubenswrapper[4867]: I0101 08:47:49.843316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f","Type":"ContainerStarted","Data":"270137edd4971e23d5e758540fd971327b25e0837d4812eceb8378f4c94d1c3c"} Jan 01 08:47:49 crc kubenswrapper[4867]: I0101 08:47:49.853982 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="289cf8b3-9453-42e5-a272-319197158dc3" podUID="bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.060650 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.061143 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="ceilometer-central-agent" containerID="cri-o://8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b" gracePeriod=30 Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.062241 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="sg-core" containerID="cri-o://cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee" gracePeriod=30 Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.062283 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="proxy-httpd" containerID="cri-o://42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f" gracePeriod=30 Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.062344 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="ceilometer-notification-agent" containerID="cri-o://cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974" gracePeriod=30 Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.072507 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.378361 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 01 08:47:51 crc kubenswrapper[4867]: E0101 08:47:51.446517 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15e2c63d_ba70_446b_881f_a0a66f440016.slice/crio-42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15e2c63d_ba70_446b_881f_a0a66f440016.slice/crio-conmon-42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15e2c63d_ba70_446b_881f_a0a66f440016.slice/crio-8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b.scope\": RecentStats: unable to find data in memory cache]" Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.863566 4867 generic.go:334] "Generic (PLEG): container finished" podID="15e2c63d-ba70-446b-881f-a0a66f440016" containerID="42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f" exitCode=0 Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.863596 4867 generic.go:334] "Generic (PLEG): container finished" podID="15e2c63d-ba70-446b-881f-a0a66f440016" containerID="cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee" exitCode=2 Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 
08:47:51.863603 4867 generic.go:334] "Generic (PLEG): container finished" podID="15e2c63d-ba70-446b-881f-a0a66f440016" containerID="8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b" exitCode=0 Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.863651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e2c63d-ba70-446b-881f-a0a66f440016","Type":"ContainerDied","Data":"42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f"} Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.863676 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e2c63d-ba70-446b-881f-a0a66f440016","Type":"ContainerDied","Data":"cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee"} Jan 01 08:47:51 crc kubenswrapper[4867]: I0101 08:47:51.863686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e2c63d-ba70-446b-881f-a0a66f440016","Type":"ContainerDied","Data":"8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b"} Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.552620 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.728770 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-combined-ca-bundle\") pod \"15e2c63d-ba70-446b-881f-a0a66f440016\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.728820 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swctj\" (UniqueName: \"kubernetes.io/projected/15e2c63d-ba70-446b-881f-a0a66f440016-kube-api-access-swctj\") pod \"15e2c63d-ba70-446b-881f-a0a66f440016\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.728858 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-sg-core-conf-yaml\") pod \"15e2c63d-ba70-446b-881f-a0a66f440016\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.728876 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-config-data\") pod \"15e2c63d-ba70-446b-881f-a0a66f440016\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.728960 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-scripts\") pod \"15e2c63d-ba70-446b-881f-a0a66f440016\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.729605 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-run-httpd\") pod \"15e2c63d-ba70-446b-881f-a0a66f440016\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.729764 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-log-httpd\") pod \"15e2c63d-ba70-446b-881f-a0a66f440016\" (UID: \"15e2c63d-ba70-446b-881f-a0a66f440016\") " Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.729979 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15e2c63d-ba70-446b-881f-a0a66f440016" (UID: "15e2c63d-ba70-446b-881f-a0a66f440016"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.730104 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.730143 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15e2c63d-ba70-446b-881f-a0a66f440016" (UID: "15e2c63d-ba70-446b-881f-a0a66f440016"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.736996 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-scripts" (OuterVolumeSpecName: "scripts") pod "15e2c63d-ba70-446b-881f-a0a66f440016" (UID: "15e2c63d-ba70-446b-881f-a0a66f440016"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.738005 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e2c63d-ba70-446b-881f-a0a66f440016-kube-api-access-swctj" (OuterVolumeSpecName: "kube-api-access-swctj") pod "15e2c63d-ba70-446b-881f-a0a66f440016" (UID: "15e2c63d-ba70-446b-881f-a0a66f440016"). InnerVolumeSpecName "kube-api-access-swctj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.776762 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15e2c63d-ba70-446b-881f-a0a66f440016" (UID: "15e2c63d-ba70-446b-881f-a0a66f440016"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.827033 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15e2c63d-ba70-446b-881f-a0a66f440016" (UID: "15e2c63d-ba70-446b-881f-a0a66f440016"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.831740 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.831779 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e2c63d-ba70-446b-881f-a0a66f440016-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.831792 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.831808 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swctj\" (UniqueName: \"kubernetes.io/projected/15e2c63d-ba70-446b-881f-a0a66f440016-kube-api-access-swctj\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.831819 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.849577 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-config-data" (OuterVolumeSpecName: "config-data") pod "15e2c63d-ba70-446b-881f-a0a66f440016" (UID: "15e2c63d-ba70-446b-881f-a0a66f440016"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.875625 4867 generic.go:334] "Generic (PLEG): container finished" podID="15e2c63d-ba70-446b-881f-a0a66f440016" containerID="cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974" exitCode=0 Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.875686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e2c63d-ba70-446b-881f-a0a66f440016","Type":"ContainerDied","Data":"cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974"} Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.875741 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.875755 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e2c63d-ba70-446b-881f-a0a66f440016","Type":"ContainerDied","Data":"8af3531642ab8ac0e16211ac784e9dfffead601b9d60131ee77b6e40d76240b3"} Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.875799 4867 scope.go:117] "RemoveContainer" containerID="42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.934798 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.937089 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e2c63d-ba70-446b-881f-a0a66f440016-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.943949 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.952430 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:52 crc kubenswrapper[4867]: 
E0101 08:47:52.952876 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="ceilometer-central-agent" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.952907 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="ceilometer-central-agent" Jan 01 08:47:52 crc kubenswrapper[4867]: E0101 08:47:52.952915 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="ceilometer-notification-agent" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.952928 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="ceilometer-notification-agent" Jan 01 08:47:52 crc kubenswrapper[4867]: E0101 08:47:52.952946 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="proxy-httpd" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.952951 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="proxy-httpd" Jan 01 08:47:52 crc kubenswrapper[4867]: E0101 08:47:52.952962 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="sg-core" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.952968 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="sg-core" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.953127 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="proxy-httpd" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.953137 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="ceilometer-central-agent" Jan 01 08:47:52 crc 
kubenswrapper[4867]: I0101 08:47:52.953152 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="sg-core" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.953165 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" containerName="ceilometer-notification-agent" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.955070 4867 scope.go:117] "RemoveContainer" containerID="cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.955162 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.958534 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.959954 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.961385 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:52 crc kubenswrapper[4867]: I0101 08:47:52.974963 4867 scope.go:117] "RemoveContainer" containerID="cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.001341 4867 scope.go:117] "RemoveContainer" containerID="8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.024455 4867 scope.go:117] "RemoveContainer" containerID="42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f" Jan 01 08:47:53 crc kubenswrapper[4867]: E0101 08:47:53.025019 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f\": container with ID starting with 42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f not found: ID does not exist" containerID="42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.025069 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f"} err="failed to get container status \"42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f\": rpc error: code = NotFound desc = could not find container \"42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f\": container with ID starting with 42e2e9e1f1dd8ff8880e93a133c2f44e1a1ff99130135a84e1796a570c13904f not found: ID does not exist" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.025097 4867 scope.go:117] "RemoveContainer" containerID="cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee" Jan 01 08:47:53 crc kubenswrapper[4867]: E0101 08:47:53.025548 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee\": container with ID starting with cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee not found: ID does not exist" containerID="cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.025581 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee"} err="failed to get container status \"cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee\": rpc error: code = NotFound desc = could not find container \"cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee\": container with ID 
starting with cdc5a23f2185ee74e9ba9c57d42b06575e428c4549a4bdbabc7cabcbaf71d0ee not found: ID does not exist" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.025604 4867 scope.go:117] "RemoveContainer" containerID="cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974" Jan 01 08:47:53 crc kubenswrapper[4867]: E0101 08:47:53.025932 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974\": container with ID starting with cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974 not found: ID does not exist" containerID="cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.025959 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974"} err="failed to get container status \"cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974\": rpc error: code = NotFound desc = could not find container \"cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974\": container with ID starting with cfc432020325ce046b0c180cb1225445390848132f6940931cdb72dda653b974 not found: ID does not exist" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.025975 4867 scope.go:117] "RemoveContainer" containerID="8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b" Jan 01 08:47:53 crc kubenswrapper[4867]: E0101 08:47:53.026586 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b\": container with ID starting with 8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b not found: ID does not exist" containerID="8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b" Jan 01 
08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.026615 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b"} err="failed to get container status \"8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b\": rpc error: code = NotFound desc = could not find container \"8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b\": container with ID starting with 8b65e10b915437b913d49d0bd8a94e4b8637ca66dbf5be348e3964bab8fc065b not found: ID does not exist" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.139979 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e2c63d-ba70-446b-881f-a0a66f440016" path="/var/lib/kubelet/pods/15e2c63d-ba70-446b-881f-a0a66f440016/volumes" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.140309 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-config-data\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.140353 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-log-httpd\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.140375 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-scripts\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.140649 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-run-httpd\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.140831 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jq4n\" (UniqueName: \"kubernetes.io/projected/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-kube-api-access-6jq4n\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.140906 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.140939 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.242608 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jq4n\" (UniqueName: \"kubernetes.io/projected/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-kube-api-access-6jq4n\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.242668 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.242693 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.242732 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-config-data\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.242752 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-log-httpd\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.242772 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-scripts\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.242853 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-run-httpd\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.244152 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-log-httpd\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.244451 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-run-httpd\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.248234 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.248408 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-config-data\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.248633 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-scripts\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.250128 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 
08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.260764 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jq4n\" (UniqueName: \"kubernetes.io/projected/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-kube-api-access-6jq4n\") pod \"ceilometer-0\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.272032 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.352226 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-86c7f77bc7-nt6jq"] Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.353753 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.357390 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.357550 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.357613 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.368785 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-86c7f77bc7-nt6jq"] Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.446655 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-public-tls-certs\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 
08:47:53.447128 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfhn\" (UniqueName: \"kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-kube-api-access-lwfhn\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.447188 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-log-httpd\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.447236 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-internal-tls-certs\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.447305 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-config-data\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.447380 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-run-httpd\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 
crc kubenswrapper[4867]: I0101 08:47:53.447437 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-etc-swift\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.447518 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-combined-ca-bundle\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.549278 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-config-data\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.549354 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-run-httpd\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.549428 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-etc-swift\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.549516 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-combined-ca-bundle\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.549594 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-public-tls-certs\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.549642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfhn\" (UniqueName: \"kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-kube-api-access-lwfhn\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.549716 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-log-httpd\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.549771 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-internal-tls-certs\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.550428 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-run-httpd\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.550734 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-log-httpd\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.556814 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-combined-ca-bundle\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.557989 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-etc-swift\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.558715 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-public-tls-certs\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.558849 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-internal-tls-certs\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.558957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-config-data\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.569779 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwfhn\" (UniqueName: \"kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-kube-api-access-lwfhn\") pod \"swift-proxy-86c7f77bc7-nt6jq\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.671378 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:47:53 crc kubenswrapper[4867]: I0101 08:47:53.749584 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:54 crc kubenswrapper[4867]: I0101 08:47:54.377088 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.409390 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-b4jlt"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.412492 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-b4jlt" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.421209 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-b4jlt"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.527589 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-tf28b"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.529565 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tf28b" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.535775 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f3c1-account-create-update-lnb82"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.536965 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f3c1-account-create-update-lnb82" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.542001 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.544158 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tf28b"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.549109 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tpbc\" (UniqueName: \"kubernetes.io/projected/c8ec6291-8802-442c-af30-08b607472e97-kube-api-access-5tpbc\") pod \"nova-api-db-create-b4jlt\" (UID: \"c8ec6291-8802-442c-af30-08b607472e97\") " pod="openstack/nova-api-db-create-b4jlt" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.549177 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8ec6291-8802-442c-af30-08b607472e97-operator-scripts\") pod \"nova-api-db-create-b4jlt\" (UID: 
\"c8ec6291-8802-442c-af30-08b607472e97\") " pod="openstack/nova-api-db-create-b4jlt" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.559179 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f3c1-account-create-update-lnb82"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.650500 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aed2e086-295b-468f-93a1-47ce57c3e871-operator-scripts\") pod \"nova-cell0-db-create-tf28b\" (UID: \"aed2e086-295b-468f-93a1-47ce57c3e871\") " pod="openstack/nova-cell0-db-create-tf28b" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.650546 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24dd201f-c983-42e0-9fcc-c80c8d38f545-operator-scripts\") pod \"nova-api-f3c1-account-create-update-lnb82\" (UID: \"24dd201f-c983-42e0-9fcc-c80c8d38f545\") " pod="openstack/nova-api-f3c1-account-create-update-lnb82" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.650677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cm5x\" (UniqueName: \"kubernetes.io/projected/24dd201f-c983-42e0-9fcc-c80c8d38f545-kube-api-access-5cm5x\") pod \"nova-api-f3c1-account-create-update-lnb82\" (UID: \"24dd201f-c983-42e0-9fcc-c80c8d38f545\") " pod="openstack/nova-api-f3c1-account-create-update-lnb82" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.650738 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tpbc\" (UniqueName: \"kubernetes.io/projected/c8ec6291-8802-442c-af30-08b607472e97-kube-api-access-5tpbc\") pod \"nova-api-db-create-b4jlt\" (UID: \"c8ec6291-8802-442c-af30-08b607472e97\") " pod="openstack/nova-api-db-create-b4jlt" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 
08:47:58.650767 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhj4v\" (UniqueName: \"kubernetes.io/projected/aed2e086-295b-468f-93a1-47ce57c3e871-kube-api-access-dhj4v\") pod \"nova-cell0-db-create-tf28b\" (UID: \"aed2e086-295b-468f-93a1-47ce57c3e871\") " pod="openstack/nova-cell0-db-create-tf28b" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.650803 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8ec6291-8802-442c-af30-08b607472e97-operator-scripts\") pod \"nova-api-db-create-b4jlt\" (UID: \"c8ec6291-8802-442c-af30-08b607472e97\") " pod="openstack/nova-api-db-create-b4jlt" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.651451 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8ec6291-8802-442c-af30-08b607472e97-operator-scripts\") pod \"nova-api-db-create-b4jlt\" (UID: \"c8ec6291-8802-442c-af30-08b607472e97\") " pod="openstack/nova-api-db-create-b4jlt" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.669589 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tpbc\" (UniqueName: \"kubernetes.io/projected/c8ec6291-8802-442c-af30-08b607472e97-kube-api-access-5tpbc\") pod \"nova-api-db-create-b4jlt\" (UID: \"c8ec6291-8802-442c-af30-08b607472e97\") " pod="openstack/nova-api-db-create-b4jlt" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.717479 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9ba1-account-create-update-27nl2"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.718540 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.722531 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.737061 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9ba1-account-create-update-27nl2"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.754470 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24dd201f-c983-42e0-9fcc-c80c8d38f545-operator-scripts\") pod \"nova-api-f3c1-account-create-update-lnb82\" (UID: \"24dd201f-c983-42e0-9fcc-c80c8d38f545\") " pod="openstack/nova-api-f3c1-account-create-update-lnb82" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.754620 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cm5x\" (UniqueName: \"kubernetes.io/projected/24dd201f-c983-42e0-9fcc-c80c8d38f545-kube-api-access-5cm5x\") pod \"nova-api-f3c1-account-create-update-lnb82\" (UID: \"24dd201f-c983-42e0-9fcc-c80c8d38f545\") " pod="openstack/nova-api-f3c1-account-create-update-lnb82" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.754712 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhj4v\" (UniqueName: \"kubernetes.io/projected/aed2e086-295b-468f-93a1-47ce57c3e871-kube-api-access-dhj4v\") pod \"nova-cell0-db-create-tf28b\" (UID: \"aed2e086-295b-468f-93a1-47ce57c3e871\") " pod="openstack/nova-cell0-db-create-tf28b" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.754762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aed2e086-295b-468f-93a1-47ce57c3e871-operator-scripts\") pod \"nova-cell0-db-create-tf28b\" (UID: 
\"aed2e086-295b-468f-93a1-47ce57c3e871\") " pod="openstack/nova-cell0-db-create-tf28b" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.755786 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aed2e086-295b-468f-93a1-47ce57c3e871-operator-scripts\") pod \"nova-cell0-db-create-tf28b\" (UID: \"aed2e086-295b-468f-93a1-47ce57c3e871\") " pod="openstack/nova-cell0-db-create-tf28b" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.756021 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24dd201f-c983-42e0-9fcc-c80c8d38f545-operator-scripts\") pod \"nova-api-f3c1-account-create-update-lnb82\" (UID: \"24dd201f-c983-42e0-9fcc-c80c8d38f545\") " pod="openstack/nova-api-f3c1-account-create-update-lnb82" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.774785 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cm5x\" (UniqueName: \"kubernetes.io/projected/24dd201f-c983-42e0-9fcc-c80c8d38f545-kube-api-access-5cm5x\") pod \"nova-api-f3c1-account-create-update-lnb82\" (UID: \"24dd201f-c983-42e0-9fcc-c80c8d38f545\") " pod="openstack/nova-api-f3c1-account-create-update-lnb82" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.775574 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhj4v\" (UniqueName: \"kubernetes.io/projected/aed2e086-295b-468f-93a1-47ce57c3e871-kube-api-access-dhj4v\") pod \"nova-cell0-db-create-tf28b\" (UID: \"aed2e086-295b-468f-93a1-47ce57c3e871\") " pod="openstack/nova-cell0-db-create-tf28b" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.780498 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-b4jlt" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.819072 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-tbs6w"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.820262 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tbs6w" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.828287 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-tbs6w"] Jan 01 08:47:58 crc kubenswrapper[4867]: W0101 08:47:58.829012 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0f72f19_cf97_45ff_bb91_b1967ab48d3d.slice/crio-a879542b3b68308b71704f9c55d47dccea3d579deb6add13caa9446a915a0309 WatchSource:0}: Error finding container a879542b3b68308b71704f9c55d47dccea3d579deb6add13caa9446a915a0309: Status 404 returned error can't find the container with id a879542b3b68308b71704f9c55d47dccea3d579deb6add13caa9446a915a0309 Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.849499 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tf28b" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.856063 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzks8\" (UniqueName: \"kubernetes.io/projected/82441228-8114-485a-a020-b8997a64900c-kube-api-access-zzks8\") pod \"nova-cell0-9ba1-account-create-update-27nl2\" (UID: \"82441228-8114-485a-a020-b8997a64900c\") " pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.856105 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82441228-8114-485a-a020-b8997a64900c-operator-scripts\") pod \"nova-cell0-9ba1-account-create-update-27nl2\" (UID: \"82441228-8114-485a-a020-b8997a64900c\") " pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.858805 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f3c1-account-create-update-lnb82" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.921145 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1f09-account-create-update-mb5l4"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.922479 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.925255 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.954203 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1f09-account-create-update-mb5l4"] Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.957943 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9w8t\" (UniqueName: \"kubernetes.io/projected/9738d77e-f75a-4b30-ac35-4e91438aad75-kube-api-access-c9w8t\") pod \"nova-cell1-db-create-tbs6w\" (UID: \"9738d77e-f75a-4b30-ac35-4e91438aad75\") " pod="openstack/nova-cell1-db-create-tbs6w" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.958057 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzks8\" (UniqueName: \"kubernetes.io/projected/82441228-8114-485a-a020-b8997a64900c-kube-api-access-zzks8\") pod \"nova-cell0-9ba1-account-create-update-27nl2\" (UID: \"82441228-8114-485a-a020-b8997a64900c\") " pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.958179 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82441228-8114-485a-a020-b8997a64900c-operator-scripts\") pod \"nova-cell0-9ba1-account-create-update-27nl2\" (UID: \"82441228-8114-485a-a020-b8997a64900c\") " pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.958225 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9738d77e-f75a-4b30-ac35-4e91438aad75-operator-scripts\") pod 
\"nova-cell1-db-create-tbs6w\" (UID: \"9738d77e-f75a-4b30-ac35-4e91438aad75\") " pod="openstack/nova-cell1-db-create-tbs6w" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.959196 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82441228-8114-485a-a020-b8997a64900c-operator-scripts\") pod \"nova-cell0-9ba1-account-create-update-27nl2\" (UID: \"82441228-8114-485a-a020-b8997a64900c\") " pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.975998 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f72f19-cf97-45ff-bb91-b1967ab48d3d","Type":"ContainerStarted","Data":"a879542b3b68308b71704f9c55d47dccea3d579deb6add13caa9446a915a0309"} Jan 01 08:47:58 crc kubenswrapper[4867]: I0101 08:47:58.976030 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzks8\" (UniqueName: \"kubernetes.io/projected/82441228-8114-485a-a020-b8997a64900c-kube-api-access-zzks8\") pod \"nova-cell0-9ba1-account-create-update-27nl2\" (UID: \"82441228-8114-485a-a020-b8997a64900c\") " pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.044100 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.060824 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9w8t\" (UniqueName: \"kubernetes.io/projected/9738d77e-f75a-4b30-ac35-4e91438aad75-kube-api-access-c9w8t\") pod \"nova-cell1-db-create-tbs6w\" (UID: \"9738d77e-f75a-4b30-ac35-4e91438aad75\") " pod="openstack/nova-cell1-db-create-tbs6w" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.060986 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/992836a0-1e44-4e3f-8a2d-139f151eef51-operator-scripts\") pod \"nova-cell1-1f09-account-create-update-mb5l4\" (UID: \"992836a0-1e44-4e3f-8a2d-139f151eef51\") " pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.061013 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-788sk\" (UniqueName: \"kubernetes.io/projected/992836a0-1e44-4e3f-8a2d-139f151eef51-kube-api-access-788sk\") pod \"nova-cell1-1f09-account-create-update-mb5l4\" (UID: \"992836a0-1e44-4e3f-8a2d-139f151eef51\") " pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.061053 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9738d77e-f75a-4b30-ac35-4e91438aad75-operator-scripts\") pod \"nova-cell1-db-create-tbs6w\" (UID: \"9738d77e-f75a-4b30-ac35-4e91438aad75\") " pod="openstack/nova-cell1-db-create-tbs6w" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.061786 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9738d77e-f75a-4b30-ac35-4e91438aad75-operator-scripts\") pod \"nova-cell1-db-create-tbs6w\" (UID: \"9738d77e-f75a-4b30-ac35-4e91438aad75\") " pod="openstack/nova-cell1-db-create-tbs6w" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.088029 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9w8t\" (UniqueName: \"kubernetes.io/projected/9738d77e-f75a-4b30-ac35-4e91438aad75-kube-api-access-c9w8t\") pod \"nova-cell1-db-create-tbs6w\" (UID: \"9738d77e-f75a-4b30-ac35-4e91438aad75\") " pod="openstack/nova-cell1-db-create-tbs6w" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.163009 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/992836a0-1e44-4e3f-8a2d-139f151eef51-operator-scripts\") pod \"nova-cell1-1f09-account-create-update-mb5l4\" (UID: \"992836a0-1e44-4e3f-8a2d-139f151eef51\") " pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.163396 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-788sk\" (UniqueName: \"kubernetes.io/projected/992836a0-1e44-4e3f-8a2d-139f151eef51-kube-api-access-788sk\") pod \"nova-cell1-1f09-account-create-update-mb5l4\" (UID: \"992836a0-1e44-4e3f-8a2d-139f151eef51\") " pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.163869 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/992836a0-1e44-4e3f-8a2d-139f151eef51-operator-scripts\") pod \"nova-cell1-1f09-account-create-update-mb5l4\" (UID: \"992836a0-1e44-4e3f-8a2d-139f151eef51\") " pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.180423 4867 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-788sk\" (UniqueName: \"kubernetes.io/projected/992836a0-1e44-4e3f-8a2d-139f151eef51-kube-api-access-788sk\") pod \"nova-cell1-1f09-account-create-update-mb5l4\" (UID: \"992836a0-1e44-4e3f-8a2d-139f151eef51\") " pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.246812 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tbs6w" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.266178 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.450111 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-b4jlt"] Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.461157 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tf28b"] Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.530845 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-86c7f77bc7-nt6jq"] Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.548943 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9ba1-account-create-update-27nl2"] Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.622698 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f3c1-account-create-update-lnb82"] Jan 01 08:47:59 crc kubenswrapper[4867]: W0101 08:47:59.815377 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9738d77e_f75a_4b30_ac35_4e91438aad75.slice/crio-6e030fa49a5101438e059a897bfd91c289d6ec9e673124cc02a6b48f1fe4183f WatchSource:0}: Error finding container 6e030fa49a5101438e059a897bfd91c289d6ec9e673124cc02a6b48f1fe4183f: Status 404 returned error can't find the container with id 
6e030fa49a5101438e059a897bfd91c289d6ec9e673124cc02a6b48f1fe4183f Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.816704 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-tbs6w"] Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.835600 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1f09-account-create-update-mb5l4"] Jan 01 08:47:59 crc kubenswrapper[4867]: W0101 08:47:59.839703 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod992836a0_1e44_4e3f_8a2d_139f151eef51.slice/crio-1a81b0bcbdd0cdbe360eec91bf7869b6a251c1fcbc35806350e1b9a59de13a51 WatchSource:0}: Error finding container 1a81b0bcbdd0cdbe360eec91bf7869b6a251c1fcbc35806350e1b9a59de13a51: Status 404 returned error can't find the container with id 1a81b0bcbdd0cdbe360eec91bf7869b6a251c1fcbc35806350e1b9a59de13a51 Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.988998 4867 generic.go:334] "Generic (PLEG): container finished" podID="c8ec6291-8802-442c-af30-08b607472e97" containerID="476c6421f57f7d1396c4f3c1ee490bf8a8da19804f6d3d654acc7bb8a4fe3682" exitCode=0 Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.989098 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b4jlt" event={"ID":"c8ec6291-8802-442c-af30-08b607472e97","Type":"ContainerDied","Data":"476c6421f57f7d1396c4f3c1ee490bf8a8da19804f6d3d654acc7bb8a4fe3682"} Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.989144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b4jlt" event={"ID":"c8ec6291-8802-442c-af30-08b607472e97","Type":"ContainerStarted","Data":"01564bd05215a7670417458e9d26f2728c92404266e4cc61ad405cffb792ebe7"} Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.990784 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f3c1-account-create-update-lnb82" 
event={"ID":"24dd201f-c983-42e0-9fcc-c80c8d38f545","Type":"ContainerStarted","Data":"726114d5fcb41f242ddcc4504589362f4b14fb8b745d7c204fd9f13392ad0d92"} Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.991864 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" event={"ID":"992836a0-1e44-4e3f-8a2d-139f151eef51","Type":"ContainerStarted","Data":"1a81b0bcbdd0cdbe360eec91bf7869b6a251c1fcbc35806350e1b9a59de13a51"} Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.994334 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tbs6w" event={"ID":"9738d77e-f75a-4b30-ac35-4e91438aad75","Type":"ContainerStarted","Data":"6e030fa49a5101438e059a897bfd91c289d6ec9e673124cc02a6b48f1fe4183f"} Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.995260 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" event={"ID":"28bc6ac4-481b-4809-a61e-f32ff6a17920","Type":"ContainerStarted","Data":"5c05402f0b7a1ed492e73b2aaadc51d362d74f80468ba808d9c123b4db15c8b6"} Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.996040 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tf28b" event={"ID":"aed2e086-295b-468f-93a1-47ce57c3e871","Type":"ContainerStarted","Data":"f4b04f742225eb82d1f848eb251fb87b505e1212e606b0942dff34a76e0cedce"} Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.996862 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" event={"ID":"82441228-8114-485a-a020-b8997a64900c","Type":"ContainerStarted","Data":"f901215054a4d27de9060e510975a31414a9a70ccf3027fa1a79e5ee287fb1b0"} Jan 01 08:47:59 crc kubenswrapper[4867]: I0101 08:47:59.998174 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d0f72f19-cf97-45ff-bb91-b1967ab48d3d","Type":"ContainerStarted","Data":"27458b23ba6b8f7ecfef7cdc0c836bd37f64d54780107e94449469657d2a22d9"} Jan 01 08:48:00 crc kubenswrapper[4867]: I0101 08:48:00.000107 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f","Type":"ContainerStarted","Data":"e698afe95247188c1349dc45263790341fa72c55e4b91893ad3b356253a8a571"} Jan 01 08:48:00 crc kubenswrapper[4867]: I0101 08:48:00.121518 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.287476939 podStartE2EDuration="12.121502169s" podCreationTimestamp="2026-01-01 08:47:48 +0000 UTC" firstStartedPulling="2026-01-01 08:47:49.114750078 +0000 UTC m=+1278.250018857" lastFinishedPulling="2026-01-01 08:47:58.948775318 +0000 UTC m=+1288.084044087" observedRunningTime="2026-01-01 08:48:00.102180347 +0000 UTC m=+1289.237449116" watchObservedRunningTime="2026-01-01 08:48:00.121502169 +0000 UTC m=+1289.256770938" Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.028444 4867 generic.go:334] "Generic (PLEG): container finished" podID="9738d77e-f75a-4b30-ac35-4e91438aad75" containerID="c5d0ca79ac4541ddccc69df852009bbfa95af4615fe22aebff2823b0767f3b63" exitCode=0 Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.032899 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tbs6w" event={"ID":"9738d77e-f75a-4b30-ac35-4e91438aad75","Type":"ContainerDied","Data":"c5d0ca79ac4541ddccc69df852009bbfa95af4615fe22aebff2823b0767f3b63"} Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.039225 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" event={"ID":"28bc6ac4-481b-4809-a61e-f32ff6a17920","Type":"ContainerStarted","Data":"e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac"} Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 
08:48:01.039277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" event={"ID":"28bc6ac4-481b-4809-a61e-f32ff6a17920","Type":"ContainerStarted","Data":"4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398"} Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.039304 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.039320 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.044601 4867 generic.go:334] "Generic (PLEG): container finished" podID="aed2e086-295b-468f-93a1-47ce57c3e871" containerID="3ca99532fece43ddcd181b8d30d852363f3e317662532c7bb4ef1999cabb7ab0" exitCode=0 Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.044669 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tf28b" event={"ID":"aed2e086-295b-468f-93a1-47ce57c3e871","Type":"ContainerDied","Data":"3ca99532fece43ddcd181b8d30d852363f3e317662532c7bb4ef1999cabb7ab0"} Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.059309 4867 generic.go:334] "Generic (PLEG): container finished" podID="82441228-8114-485a-a020-b8997a64900c" containerID="21e877e59228579e71714143a5ac19469d97317cbd760c47050750e19b0271c0" exitCode=0 Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.059387 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" event={"ID":"82441228-8114-485a-a020-b8997a64900c","Type":"ContainerDied","Data":"21e877e59228579e71714143a5ac19469d97317cbd760c47050750e19b0271c0"} Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.062840 4867 generic.go:334] "Generic (PLEG): container finished" podID="24dd201f-c983-42e0-9fcc-c80c8d38f545" 
containerID="48c5ec08c4d1c98aa016392451b054e9e19f7bcf56b86ed4573484ac5f6be544" exitCode=0 Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.062894 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f3c1-account-create-update-lnb82" event={"ID":"24dd201f-c983-42e0-9fcc-c80c8d38f545","Type":"ContainerDied","Data":"48c5ec08c4d1c98aa016392451b054e9e19f7bcf56b86ed4573484ac5f6be544"} Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.085787 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f72f19-cf97-45ff-bb91-b1967ab48d3d","Type":"ContainerStarted","Data":"d6780b2dfdce67658766d2b36b4422be65153c58b0be47a18ebbc9469ca770e0"} Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.092577 4867 generic.go:334] "Generic (PLEG): container finished" podID="992836a0-1e44-4e3f-8a2d-139f151eef51" containerID="429cd94d92c39d729bb98925ebfd0b4ac0c1dc9a973cba5d9f2daf931ca4c489" exitCode=0 Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.092931 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" event={"ID":"992836a0-1e44-4e3f-8a2d-139f151eef51","Type":"ContainerDied","Data":"429cd94d92c39d729bb98925ebfd0b4ac0c1dc9a973cba5d9f2daf931ca4c489"} Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.113341 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" podStartSLOduration=8.11331896 podStartE2EDuration="8.11331896s" podCreationTimestamp="2026-01-01 08:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:01.103199816 +0000 UTC m=+1290.238468595" watchObservedRunningTime="2026-01-01 08:48:01.11331896 +0000 UTC m=+1290.248587729" Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.611143 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-b4jlt" Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.742956 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8ec6291-8802-442c-af30-08b607472e97-operator-scripts\") pod \"c8ec6291-8802-442c-af30-08b607472e97\" (UID: \"c8ec6291-8802-442c-af30-08b607472e97\") " Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.743022 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tpbc\" (UniqueName: \"kubernetes.io/projected/c8ec6291-8802-442c-af30-08b607472e97-kube-api-access-5tpbc\") pod \"c8ec6291-8802-442c-af30-08b607472e97\" (UID: \"c8ec6291-8802-442c-af30-08b607472e97\") " Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.743980 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ec6291-8802-442c-af30-08b607472e97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8ec6291-8802-442c-af30-08b607472e97" (UID: "c8ec6291-8802-442c-af30-08b607472e97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.749548 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ec6291-8802-442c-af30-08b607472e97-kube-api-access-5tpbc" (OuterVolumeSpecName: "kube-api-access-5tpbc") pod "c8ec6291-8802-442c-af30-08b607472e97" (UID: "c8ec6291-8802-442c-af30-08b607472e97"). InnerVolumeSpecName "kube-api-access-5tpbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.844905 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8ec6291-8802-442c-af30-08b607472e97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:01 crc kubenswrapper[4867]: I0101 08:48:01.844934 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tpbc\" (UniqueName: \"kubernetes.io/projected/c8ec6291-8802-442c-af30-08b607472e97-kube-api-access-5tpbc\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.103249 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f72f19-cf97-45ff-bb91-b1967ab48d3d","Type":"ContainerStarted","Data":"d145181f90dcd17c628dba3b9a6bf741ca9c01724f047ccdbc7dc6af5d2d9f87"} Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.104792 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b4jlt" event={"ID":"c8ec6291-8802-442c-af30-08b607472e97","Type":"ContainerDied","Data":"01564bd05215a7670417458e9d26f2728c92404266e4cc61ad405cffb792ebe7"} Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.104914 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01564bd05215a7670417458e9d26f2728c92404266e4cc61ad405cffb792ebe7" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.104945 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b4jlt" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.512249 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-tbs6w" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.659866 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9w8t\" (UniqueName: \"kubernetes.io/projected/9738d77e-f75a-4b30-ac35-4e91438aad75-kube-api-access-c9w8t\") pod \"9738d77e-f75a-4b30-ac35-4e91438aad75\" (UID: \"9738d77e-f75a-4b30-ac35-4e91438aad75\") " Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.660015 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9738d77e-f75a-4b30-ac35-4e91438aad75-operator-scripts\") pod \"9738d77e-f75a-4b30-ac35-4e91438aad75\" (UID: \"9738d77e-f75a-4b30-ac35-4e91438aad75\") " Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.660815 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9738d77e-f75a-4b30-ac35-4e91438aad75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9738d77e-f75a-4b30-ac35-4e91438aad75" (UID: "9738d77e-f75a-4b30-ac35-4e91438aad75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.665145 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9738d77e-f75a-4b30-ac35-4e91438aad75-kube-api-access-c9w8t" (OuterVolumeSpecName: "kube-api-access-c9w8t") pod "9738d77e-f75a-4b30-ac35-4e91438aad75" (UID: "9738d77e-f75a-4b30-ac35-4e91438aad75"). InnerVolumeSpecName "kube-api-access-c9w8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.714908 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.720276 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tf28b" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.736077 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.747029 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f3c1-account-create-update-lnb82" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.763446 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9738d77e-f75a-4b30-ac35-4e91438aad75-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.763499 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9w8t\" (UniqueName: \"kubernetes.io/projected/9738d77e-f75a-4b30-ac35-4e91438aad75-kube-api-access-c9w8t\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.864253 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cm5x\" (UniqueName: \"kubernetes.io/projected/24dd201f-c983-42e0-9fcc-c80c8d38f545-kube-api-access-5cm5x\") pod \"24dd201f-c983-42e0-9fcc-c80c8d38f545\" (UID: \"24dd201f-c983-42e0-9fcc-c80c8d38f545\") " Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.864622 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzks8\" (UniqueName: \"kubernetes.io/projected/82441228-8114-485a-a020-b8997a64900c-kube-api-access-zzks8\") pod \"82441228-8114-485a-a020-b8997a64900c\" (UID: \"82441228-8114-485a-a020-b8997a64900c\") " Jan 01 08:48:02 crc 
kubenswrapper[4867]: I0101 08:48:02.864650 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhj4v\" (UniqueName: \"kubernetes.io/projected/aed2e086-295b-468f-93a1-47ce57c3e871-kube-api-access-dhj4v\") pod \"aed2e086-295b-468f-93a1-47ce57c3e871\" (UID: \"aed2e086-295b-468f-93a1-47ce57c3e871\") " Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.864725 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-788sk\" (UniqueName: \"kubernetes.io/projected/992836a0-1e44-4e3f-8a2d-139f151eef51-kube-api-access-788sk\") pod \"992836a0-1e44-4e3f-8a2d-139f151eef51\" (UID: \"992836a0-1e44-4e3f-8a2d-139f151eef51\") " Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.864810 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aed2e086-295b-468f-93a1-47ce57c3e871-operator-scripts\") pod \"aed2e086-295b-468f-93a1-47ce57c3e871\" (UID: \"aed2e086-295b-468f-93a1-47ce57c3e871\") " Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.864858 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/992836a0-1e44-4e3f-8a2d-139f151eef51-operator-scripts\") pod \"992836a0-1e44-4e3f-8a2d-139f151eef51\" (UID: \"992836a0-1e44-4e3f-8a2d-139f151eef51\") " Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.864939 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24dd201f-c983-42e0-9fcc-c80c8d38f545-operator-scripts\") pod \"24dd201f-c983-42e0-9fcc-c80c8d38f545\" (UID: \"24dd201f-c983-42e0-9fcc-c80c8d38f545\") " Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.865045 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/82441228-8114-485a-a020-b8997a64900c-operator-scripts\") pod \"82441228-8114-485a-a020-b8997a64900c\" (UID: \"82441228-8114-485a-a020-b8997a64900c\") " Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.865578 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992836a0-1e44-4e3f-8a2d-139f151eef51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "992836a0-1e44-4e3f-8a2d-139f151eef51" (UID: "992836a0-1e44-4e3f-8a2d-139f151eef51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.865714 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82441228-8114-485a-a020-b8997a64900c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82441228-8114-485a-a020-b8997a64900c" (UID: "82441228-8114-485a-a020-b8997a64900c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.866328 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aed2e086-295b-468f-93a1-47ce57c3e871-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aed2e086-295b-468f-93a1-47ce57c3e871" (UID: "aed2e086-295b-468f-93a1-47ce57c3e871"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.866376 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24dd201f-c983-42e0-9fcc-c80c8d38f545-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24dd201f-c983-42e0-9fcc-c80c8d38f545" (UID: "24dd201f-c983-42e0-9fcc-c80c8d38f545"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.866547 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aed2e086-295b-468f-93a1-47ce57c3e871-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.866571 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/992836a0-1e44-4e3f-8a2d-139f151eef51-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.866585 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24dd201f-c983-42e0-9fcc-c80c8d38f545-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.866597 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82441228-8114-485a-a020-b8997a64900c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.867864 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24dd201f-c983-42e0-9fcc-c80c8d38f545-kube-api-access-5cm5x" (OuterVolumeSpecName: "kube-api-access-5cm5x") pod "24dd201f-c983-42e0-9fcc-c80c8d38f545" (UID: "24dd201f-c983-42e0-9fcc-c80c8d38f545"). InnerVolumeSpecName "kube-api-access-5cm5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.869136 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed2e086-295b-468f-93a1-47ce57c3e871-kube-api-access-dhj4v" (OuterVolumeSpecName: "kube-api-access-dhj4v") pod "aed2e086-295b-468f-93a1-47ce57c3e871" (UID: "aed2e086-295b-468f-93a1-47ce57c3e871"). 
InnerVolumeSpecName "kube-api-access-dhj4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.876179 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82441228-8114-485a-a020-b8997a64900c-kube-api-access-zzks8" (OuterVolumeSpecName: "kube-api-access-zzks8") pod "82441228-8114-485a-a020-b8997a64900c" (UID: "82441228-8114-485a-a020-b8997a64900c"). InnerVolumeSpecName "kube-api-access-zzks8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.887155 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992836a0-1e44-4e3f-8a2d-139f151eef51-kube-api-access-788sk" (OuterVolumeSpecName: "kube-api-access-788sk") pod "992836a0-1e44-4e3f-8a2d-139f151eef51" (UID: "992836a0-1e44-4e3f-8a2d-139f151eef51"). InnerVolumeSpecName "kube-api-access-788sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.967824 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cm5x\" (UniqueName: \"kubernetes.io/projected/24dd201f-c983-42e0-9fcc-c80c8d38f545-kube-api-access-5cm5x\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.967858 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzks8\" (UniqueName: \"kubernetes.io/projected/82441228-8114-485a-a020-b8997a64900c-kube-api-access-zzks8\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.967867 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhj4v\" (UniqueName: \"kubernetes.io/projected/aed2e086-295b-468f-93a1-47ce57c3e871-kube-api-access-dhj4v\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:02 crc kubenswrapper[4867]: I0101 08:48:02.967879 4867 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-788sk\" (UniqueName: \"kubernetes.io/projected/992836a0-1e44-4e3f-8a2d-139f151eef51-kube-api-access-788sk\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.114682 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f3c1-account-create-update-lnb82" event={"ID":"24dd201f-c983-42e0-9fcc-c80c8d38f545","Type":"ContainerDied","Data":"726114d5fcb41f242ddcc4504589362f4b14fb8b745d7c204fd9f13392ad0d92"} Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.114731 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="726114d5fcb41f242ddcc4504589362f4b14fb8b745d7c204fd9f13392ad0d92" Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.114797 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f3c1-account-create-update-lnb82" Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.119398 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" event={"ID":"992836a0-1e44-4e3f-8a2d-139f151eef51","Type":"ContainerDied","Data":"1a81b0bcbdd0cdbe360eec91bf7869b6a251c1fcbc35806350e1b9a59de13a51"} Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.119447 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a81b0bcbdd0cdbe360eec91bf7869b6a251c1fcbc35806350e1b9a59de13a51" Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.119525 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1f09-account-create-update-mb5l4" Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.125599 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tbs6w" event={"ID":"9738d77e-f75a-4b30-ac35-4e91438aad75","Type":"ContainerDied","Data":"6e030fa49a5101438e059a897bfd91c289d6ec9e673124cc02a6b48f1fe4183f"} Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.125683 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e030fa49a5101438e059a897bfd91c289d6ec9e673124cc02a6b48f1fe4183f" Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.125782 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tbs6w" Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.149148 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tf28b" Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.164195 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.164339 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tf28b" event={"ID":"aed2e086-295b-468f-93a1-47ce57c3e871","Type":"ContainerDied","Data":"f4b04f742225eb82d1f848eb251fb87b505e1212e606b0942dff34a76e0cedce"} Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.164380 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4b04f742225eb82d1f848eb251fb87b505e1212e606b0942dff34a76e0cedce" Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.164390 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9ba1-account-create-update-27nl2" event={"ID":"82441228-8114-485a-a020-b8997a64900c","Type":"ContainerDied","Data":"f901215054a4d27de9060e510975a31414a9a70ccf3027fa1a79e5ee287fb1b0"} Jan 01 08:48:03 crc kubenswrapper[4867]: I0101 08:48:03.164399 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f901215054a4d27de9060e510975a31414a9a70ccf3027fa1a79e5ee287fb1b0" Jan 01 08:48:04 crc kubenswrapper[4867]: I0101 08:48:04.174110 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f72f19-cf97-45ff-bb91-b1967ab48d3d","Type":"ContainerStarted","Data":"9ecaafd9891ef20d0b36e7b3ffca1db861b51d86c82e587d7dcec70a6ed2c2d5"} Jan 01 08:48:04 crc kubenswrapper[4867]: I0101 08:48:04.174263 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="ceilometer-central-agent" containerID="cri-o://27458b23ba6b8f7ecfef7cdc0c836bd37f64d54780107e94449469657d2a22d9" gracePeriod=30 Jan 01 08:48:04 crc kubenswrapper[4867]: I0101 08:48:04.174300 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="ceilometer-notification-agent" containerID="cri-o://d6780b2dfdce67658766d2b36b4422be65153c58b0be47a18ebbc9469ca770e0" gracePeriod=30 Jan 01 08:48:04 crc kubenswrapper[4867]: I0101 08:48:04.174311 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="sg-core" containerID="cri-o://d145181f90dcd17c628dba3b9a6bf741ca9c01724f047ccdbc7dc6af5d2d9f87" gracePeriod=30 Jan 01 08:48:04 crc kubenswrapper[4867]: I0101 08:48:04.174326 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="proxy-httpd" containerID="cri-o://9ecaafd9891ef20d0b36e7b3ffca1db861b51d86c82e587d7dcec70a6ed2c2d5" gracePeriod=30 Jan 01 08:48:04 crc kubenswrapper[4867]: I0101 08:48:04.174824 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 01 08:48:04 crc kubenswrapper[4867]: I0101 08:48:04.198315 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.958307755 podStartE2EDuration="12.198298862s" podCreationTimestamp="2026-01-01 08:47:52 +0000 UTC" firstStartedPulling="2026-01-01 08:47:58.863850918 +0000 UTC m=+1287.999119687" lastFinishedPulling="2026-01-01 08:48:03.103842025 +0000 UTC m=+1292.239110794" observedRunningTime="2026-01-01 08:48:04.193709964 +0000 UTC m=+1293.328978763" watchObservedRunningTime="2026-01-01 08:48:04.198298862 +0000 UTC m=+1293.333567631" Jan 01 08:48:05 crc kubenswrapper[4867]: I0101 08:48:05.188452 4867 generic.go:334] "Generic (PLEG): container finished" podID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerID="9ecaafd9891ef20d0b36e7b3ffca1db861b51d86c82e587d7dcec70a6ed2c2d5" exitCode=0 Jan 01 08:48:05 crc kubenswrapper[4867]: I0101 08:48:05.188831 4867 generic.go:334] "Generic 
(PLEG): container finished" podID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerID="d145181f90dcd17c628dba3b9a6bf741ca9c01724f047ccdbc7dc6af5d2d9f87" exitCode=2 Jan 01 08:48:05 crc kubenswrapper[4867]: I0101 08:48:05.188841 4867 generic.go:334] "Generic (PLEG): container finished" podID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerID="d6780b2dfdce67658766d2b36b4422be65153c58b0be47a18ebbc9469ca770e0" exitCode=0 Jan 01 08:48:05 crc kubenswrapper[4867]: I0101 08:48:05.188519 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f72f19-cf97-45ff-bb91-b1967ab48d3d","Type":"ContainerDied","Data":"9ecaafd9891ef20d0b36e7b3ffca1db861b51d86c82e587d7dcec70a6ed2c2d5"} Jan 01 08:48:05 crc kubenswrapper[4867]: I0101 08:48:05.188902 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f72f19-cf97-45ff-bb91-b1967ab48d3d","Type":"ContainerDied","Data":"d145181f90dcd17c628dba3b9a6bf741ca9c01724f047ccdbc7dc6af5d2d9f87"} Jan 01 08:48:05 crc kubenswrapper[4867]: I0101 08:48:05.188923 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f72f19-cf97-45ff-bb91-b1967ab48d3d","Type":"ContainerDied","Data":"d6780b2dfdce67658766d2b36b4422be65153c58b0be47a18ebbc9469ca770e0"} Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.216353 4867 generic.go:334] "Generic (PLEG): container finished" podID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerID="27458b23ba6b8f7ecfef7cdc0c836bd37f64d54780107e94449469657d2a22d9" exitCode=0 Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.216444 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f72f19-cf97-45ff-bb91-b1967ab48d3d","Type":"ContainerDied","Data":"27458b23ba6b8f7ecfef7cdc0c836bd37f64d54780107e94449469657d2a22d9"} Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.679436 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.681282 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.955664 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfnlz"] Jan 01 08:48:08 crc kubenswrapper[4867]: E0101 08:48:08.961756 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992836a0-1e44-4e3f-8a2d-139f151eef51" containerName="mariadb-account-create-update" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.961771 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="992836a0-1e44-4e3f-8a2d-139f151eef51" containerName="mariadb-account-create-update" Jan 01 08:48:08 crc kubenswrapper[4867]: E0101 08:48:08.961793 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dd201f-c983-42e0-9fcc-c80c8d38f545" containerName="mariadb-account-create-update" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.961799 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dd201f-c983-42e0-9fcc-c80c8d38f545" containerName="mariadb-account-create-update" Jan 01 08:48:08 crc kubenswrapper[4867]: E0101 08:48:08.961814 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82441228-8114-485a-a020-b8997a64900c" containerName="mariadb-account-create-update" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.961821 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="82441228-8114-485a-a020-b8997a64900c" containerName="mariadb-account-create-update" Jan 01 08:48:08 crc kubenswrapper[4867]: E0101 08:48:08.961832 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9738d77e-f75a-4b30-ac35-4e91438aad75" containerName="mariadb-database-create" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.961837 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9738d77e-f75a-4b30-ac35-4e91438aad75" containerName="mariadb-database-create" Jan 01 08:48:08 crc kubenswrapper[4867]: E0101 08:48:08.961845 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed2e086-295b-468f-93a1-47ce57c3e871" containerName="mariadb-database-create" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.961850 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed2e086-295b-468f-93a1-47ce57c3e871" containerName="mariadb-database-create" Jan 01 08:48:08 crc kubenswrapper[4867]: E0101 08:48:08.961864 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ec6291-8802-442c-af30-08b607472e97" containerName="mariadb-database-create" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.961869 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ec6291-8802-442c-af30-08b607472e97" containerName="mariadb-database-create" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.962037 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ec6291-8802-442c-af30-08b607472e97" containerName="mariadb-database-create" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.962049 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="992836a0-1e44-4e3f-8a2d-139f151eef51" containerName="mariadb-account-create-update" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.962061 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed2e086-295b-468f-93a1-47ce57c3e871" containerName="mariadb-database-create" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.962072 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="24dd201f-c983-42e0-9fcc-c80c8d38f545" containerName="mariadb-account-create-update" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.962081 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="82441228-8114-485a-a020-b8997a64900c" containerName="mariadb-account-create-update" Jan 01 
08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.962097 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9738d77e-f75a-4b30-ac35-4e91438aad75" containerName="mariadb-database-create" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.962683 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.969343 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.969712 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-khd69" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.969863 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 01 08:48:08 crc kubenswrapper[4867]: I0101 08:48:08.991950 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfnlz"] Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.079973 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.080016 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsqt8\" (UniqueName: \"kubernetes.io/projected/68297d63-ca47-4d11-8e40-3d6903527773-kube-api-access-gsqt8\") pod \"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 
08:48:09.080059 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-config-data\") pod \"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.080087 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-scripts\") pod \"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.146436 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.181346 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-config-data\") pod \"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.181625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-scripts\") pod \"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.181778 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-combined-ca-bundle\") pod 
\"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.181858 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsqt8\" (UniqueName: \"kubernetes.io/projected/68297d63-ca47-4d11-8e40-3d6903527773-kube-api-access-gsqt8\") pod \"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.192450 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-config-data\") pod \"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.195850 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.200254 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-scripts\") pod \"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.208588 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsqt8\" (UniqueName: \"kubernetes.io/projected/68297d63-ca47-4d11-8e40-3d6903527773-kube-api-access-gsqt8\") pod 
\"nova-cell0-conductor-db-sync-xfnlz\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.231314 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.231868 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f72f19-cf97-45ff-bb91-b1967ab48d3d","Type":"ContainerDied","Data":"a879542b3b68308b71704f9c55d47dccea3d579deb6add13caa9446a915a0309"} Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.231916 4867 scope.go:117] "RemoveContainer" containerID="9ecaafd9891ef20d0b36e7b3ffca1db861b51d86c82e587d7dcec70a6ed2c2d5" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.271357 4867 scope.go:117] "RemoveContainer" containerID="d145181f90dcd17c628dba3b9a6bf741ca9c01724f047ccdbc7dc6af5d2d9f87" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.283492 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-run-httpd\") pod \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.283565 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-scripts\") pod \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.283630 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-log-httpd\") pod \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " Jan 01 
08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.283666 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-combined-ca-bundle\") pod \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.283687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-config-data\") pod \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.283723 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-sg-core-conf-yaml\") pod \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.283767 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jq4n\" (UniqueName: \"kubernetes.io/projected/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-kube-api-access-6jq4n\") pod \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\" (UID: \"d0f72f19-cf97-45ff-bb91-b1967ab48d3d\") " Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.284534 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d0f72f19-cf97-45ff-bb91-b1967ab48d3d" (UID: "d0f72f19-cf97-45ff-bb91-b1967ab48d3d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.285377 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d0f72f19-cf97-45ff-bb91-b1967ab48d3d" (UID: "d0f72f19-cf97-45ff-bb91-b1967ab48d3d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.289493 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-kube-api-access-6jq4n" (OuterVolumeSpecName: "kube-api-access-6jq4n") pod "d0f72f19-cf97-45ff-bb91-b1967ab48d3d" (UID: "d0f72f19-cf97-45ff-bb91-b1967ab48d3d"). InnerVolumeSpecName "kube-api-access-6jq4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.291235 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-scripts" (OuterVolumeSpecName: "scripts") pod "d0f72f19-cf97-45ff-bb91-b1967ab48d3d" (UID: "d0f72f19-cf97-45ff-bb91-b1967ab48d3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.303807 4867 scope.go:117] "RemoveContainer" containerID="d6780b2dfdce67658766d2b36b4422be65153c58b0be47a18ebbc9469ca770e0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.308455 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.324735 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d0f72f19-cf97-45ff-bb91-b1967ab48d3d" (UID: "d0f72f19-cf97-45ff-bb91-b1967ab48d3d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.362527 4867 scope.go:117] "RemoveContainer" containerID="27458b23ba6b8f7ecfef7cdc0c836bd37f64d54780107e94449469657d2a22d9" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.387957 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0f72f19-cf97-45ff-bb91-b1967ab48d3d" (UID: "d0f72f19-cf97-45ff-bb91-b1967ab48d3d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.389140 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.389161 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.389170 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.389180 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.389192 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.389201 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jq4n\" (UniqueName: \"kubernetes.io/projected/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-kube-api-access-6jq4n\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.413129 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-config-data" (OuterVolumeSpecName: "config-data") pod "d0f72f19-cf97-45ff-bb91-b1967ab48d3d" (UID: "d0f72f19-cf97-45ff-bb91-b1967ab48d3d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.491407 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f72f19-cf97-45ff-bb91-b1967ab48d3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.584786 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.598646 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.607755 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:09 crc kubenswrapper[4867]: E0101 08:48:09.608181 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="ceilometer-central-agent" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.608197 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="ceilometer-central-agent" Jan 01 08:48:09 crc kubenswrapper[4867]: E0101 08:48:09.608219 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="sg-core" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.608227 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="sg-core" Jan 01 08:48:09 crc kubenswrapper[4867]: E0101 08:48:09.608239 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="proxy-httpd" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.608245 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="proxy-httpd" Jan 01 08:48:09 crc 
kubenswrapper[4867]: E0101 08:48:09.608252 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="ceilometer-notification-agent" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.608258 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="ceilometer-notification-agent" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.608448 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="ceilometer-notification-agent" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.608461 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="sg-core" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.608479 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="ceilometer-central-agent" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.608491 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" containerName="proxy-httpd" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.609954 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.612059 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.612426 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.621543 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.697691 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67sv6\" (UniqueName: \"kubernetes.io/projected/4cce5658-90eb-4be8-8043-90237f93caf1-kube-api-access-67sv6\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.697761 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.697787 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-config-data\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.697911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-scripts\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " 
pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.697928 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-log-httpd\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.697947 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.697964 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-run-httpd\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.780958 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfnlz"] Jan 01 08:48:09 crc kubenswrapper[4867]: W0101 08:48:09.792169 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68297d63_ca47_4d11_8e40_3d6903527773.slice/crio-e85110665a9760d9243d90ff2d6fa3fc090d42026e6702e65e1f66826e5feb36 WatchSource:0}: Error finding container e85110665a9760d9243d90ff2d6fa3fc090d42026e6702e65e1f66826e5feb36: Status 404 returned error can't find the container with id e85110665a9760d9243d90ff2d6fa3fc090d42026e6702e65e1f66826e5feb36 Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.799904 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-67sv6\" (UniqueName: \"kubernetes.io/projected/4cce5658-90eb-4be8-8043-90237f93caf1-kube-api-access-67sv6\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.800004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-config-data\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.800036 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.800110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-scripts\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.800132 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-log-httpd\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.800194 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 
crc kubenswrapper[4867]: I0101 08:48:09.800217 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-run-httpd\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.800919 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-run-httpd\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.801471 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-log-httpd\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.805048 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-scripts\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.805369 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.805861 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-config-data\") pod \"ceilometer-0\" (UID: 
\"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.806061 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.816629 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67sv6\" (UniqueName: \"kubernetes.io/projected/4cce5658-90eb-4be8-8043-90237f93caf1-kube-api-access-67sv6\") pod \"ceilometer-0\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " pod="openstack/ceilometer-0" Jan 01 08:48:09 crc kubenswrapper[4867]: I0101 08:48:09.928059 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:10 crc kubenswrapper[4867]: I0101 08:48:10.237608 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfnlz" event={"ID":"68297d63-ca47-4d11-8e40-3d6903527773","Type":"ContainerStarted","Data":"e85110665a9760d9243d90ff2d6fa3fc090d42026e6702e65e1f66826e5feb36"} Jan 01 08:48:10 crc kubenswrapper[4867]: I0101 08:48:10.425409 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:11 crc kubenswrapper[4867]: I0101 08:48:11.144220 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f72f19-cf97-45ff-bb91-b1967ab48d3d" path="/var/lib/kubelet/pods/d0f72f19-cf97-45ff-bb91-b1967ab48d3d/volumes" Jan 01 08:48:11 crc kubenswrapper[4867]: I0101 08:48:11.247747 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cce5658-90eb-4be8-8043-90237f93caf1","Type":"ContainerStarted","Data":"ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d"} Jan 01 08:48:11 crc 
kubenswrapper[4867]: I0101 08:48:11.247786 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cce5658-90eb-4be8-8043-90237f93caf1","Type":"ContainerStarted","Data":"a1f23b28cc4def5eed0984c46df3af45a082a440f6ea82f809e29f3ea960f355"} Jan 01 08:48:12 crc kubenswrapper[4867]: I0101 08:48:12.259268 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cce5658-90eb-4be8-8043-90237f93caf1","Type":"ContainerStarted","Data":"376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df"} Jan 01 08:48:12 crc kubenswrapper[4867]: I0101 08:48:12.727516 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:13 crc kubenswrapper[4867]: I0101 08:48:13.268277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cce5658-90eb-4be8-8043-90237f93caf1","Type":"ContainerStarted","Data":"36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b"} Jan 01 08:48:18 crc kubenswrapper[4867]: I0101 08:48:18.326592 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cce5658-90eb-4be8-8043-90237f93caf1","Type":"ContainerStarted","Data":"aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e"} Jan 01 08:48:18 crc kubenswrapper[4867]: I0101 08:48:18.327247 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 01 08:48:18 crc kubenswrapper[4867]: I0101 08:48:18.326736 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="sg-core" containerID="cri-o://36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b" gracePeriod=30 Jan 01 08:48:18 crc kubenswrapper[4867]: I0101 08:48:18.326756 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="ceilometer-notification-agent" containerID="cri-o://376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df" gracePeriod=30 Jan 01 08:48:18 crc kubenswrapper[4867]: I0101 08:48:18.327125 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="ceilometer-central-agent" containerID="cri-o://ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d" gracePeriod=30 Jan 01 08:48:18 crc kubenswrapper[4867]: I0101 08:48:18.326710 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="proxy-httpd" containerID="cri-o://aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e" gracePeriod=30 Jan 01 08:48:18 crc kubenswrapper[4867]: I0101 08:48:18.331475 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfnlz" event={"ID":"68297d63-ca47-4d11-8e40-3d6903527773","Type":"ContainerStarted","Data":"95161ca79e37e61717b65705d5f72df0f5d3ee56eaca3808bc4567d29151f991"} Jan 01 08:48:18 crc kubenswrapper[4867]: I0101 08:48:18.360972 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.042293941 podStartE2EDuration="9.360953643s" podCreationTimestamp="2026-01-01 08:48:09 +0000 UTC" firstStartedPulling="2026-01-01 08:48:10.454029881 +0000 UTC m=+1299.589298650" lastFinishedPulling="2026-01-01 08:48:17.772689543 +0000 UTC m=+1306.907958352" observedRunningTime="2026-01-01 08:48:18.354507582 +0000 UTC m=+1307.489776361" watchObservedRunningTime="2026-01-01 08:48:18.360953643 +0000 UTC m=+1307.496222412" Jan 01 08:48:19 crc kubenswrapper[4867]: I0101 08:48:19.344789 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cce5658-90eb-4be8-8043-90237f93caf1" 
containerID="aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e" exitCode=0 Jan 01 08:48:19 crc kubenswrapper[4867]: I0101 08:48:19.344832 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cce5658-90eb-4be8-8043-90237f93caf1" containerID="36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b" exitCode=2 Jan 01 08:48:19 crc kubenswrapper[4867]: I0101 08:48:19.344842 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cce5658-90eb-4be8-8043-90237f93caf1" containerID="376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df" exitCode=0 Jan 01 08:48:19 crc kubenswrapper[4867]: I0101 08:48:19.344850 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cce5658-90eb-4be8-8043-90237f93caf1","Type":"ContainerDied","Data":"aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e"} Jan 01 08:48:19 crc kubenswrapper[4867]: I0101 08:48:19.344945 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cce5658-90eb-4be8-8043-90237f93caf1","Type":"ContainerDied","Data":"36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b"} Jan 01 08:48:19 crc kubenswrapper[4867]: I0101 08:48:19.344966 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cce5658-90eb-4be8-8043-90237f93caf1","Type":"ContainerDied","Data":"376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df"} Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.079542 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xfnlz" podStartSLOduration=4.095354198 podStartE2EDuration="12.079512344s" podCreationTimestamp="2026-01-01 08:48:08 +0000 UTC" firstStartedPulling="2026-01-01 08:48:09.795283137 +0000 UTC m=+1298.930551906" lastFinishedPulling="2026-01-01 08:48:17.779441273 +0000 UTC m=+1306.914710052" observedRunningTime="2026-01-01 
08:48:18.386050536 +0000 UTC m=+1307.521319305" watchObservedRunningTime="2026-01-01 08:48:20.079512344 +0000 UTC m=+1309.214781133" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.097315 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.097949 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa829cab-564c-410b-a84f-50bc6bba8676" containerName="glance-log" containerID="cri-o://82b64cc4d5a9681c6b6e7f280a45133b1d7d8b66620bf65a4a254e9c97580d6a" gracePeriod=30 Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.098132 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa829cab-564c-410b-a84f-50bc6bba8676" containerName="glance-httpd" containerID="cri-o://bf5a24a0389da79ac334afcfb3dee92dc2c07c58a8ef2298c7e0f9ace80ed7c0" gracePeriod=30 Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.282707 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.355869 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cce5658-90eb-4be8-8043-90237f93caf1" containerID="ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d" exitCode=0 Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.355918 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cce5658-90eb-4be8-8043-90237f93caf1","Type":"ContainerDied","Data":"ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d"} Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.355942 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.355959 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cce5658-90eb-4be8-8043-90237f93caf1","Type":"ContainerDied","Data":"a1f23b28cc4def5eed0984c46df3af45a082a440f6ea82f809e29f3ea960f355"} Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.355977 4867 scope.go:117] "RemoveContainer" containerID="aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.358293 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa829cab-564c-410b-a84f-50bc6bba8676" containerID="82b64cc4d5a9681c6b6e7f280a45133b1d7d8b66620bf65a4a254e9c97580d6a" exitCode=143 Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.358325 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa829cab-564c-410b-a84f-50bc6bba8676","Type":"ContainerDied","Data":"82b64cc4d5a9681c6b6e7f280a45133b1d7d8b66620bf65a4a254e9c97580d6a"} Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.380507 4867 scope.go:117] "RemoveContainer" containerID="36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.395757 4867 scope.go:117] "RemoveContainer" containerID="376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.424051 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-run-httpd\") pod \"4cce5658-90eb-4be8-8043-90237f93caf1\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.424130 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-config-data\") pod \"4cce5658-90eb-4be8-8043-90237f93caf1\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.424234 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-log-httpd\") pod \"4cce5658-90eb-4be8-8043-90237f93caf1\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.424275 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-combined-ca-bundle\") pod \"4cce5658-90eb-4be8-8043-90237f93caf1\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.424351 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67sv6\" (UniqueName: \"kubernetes.io/projected/4cce5658-90eb-4be8-8043-90237f93caf1-kube-api-access-67sv6\") pod \"4cce5658-90eb-4be8-8043-90237f93caf1\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.424370 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-sg-core-conf-yaml\") pod \"4cce5658-90eb-4be8-8043-90237f93caf1\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.424433 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-scripts\") pod \"4cce5658-90eb-4be8-8043-90237f93caf1\" (UID: \"4cce5658-90eb-4be8-8043-90237f93caf1\") " Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.425055 
4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4cce5658-90eb-4be8-8043-90237f93caf1" (UID: "4cce5658-90eb-4be8-8043-90237f93caf1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.425194 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4cce5658-90eb-4be8-8043-90237f93caf1" (UID: "4cce5658-90eb-4be8-8043-90237f93caf1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.430331 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-scripts" (OuterVolumeSpecName: "scripts") pod "4cce5658-90eb-4be8-8043-90237f93caf1" (UID: "4cce5658-90eb-4be8-8043-90237f93caf1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.430665 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cce5658-90eb-4be8-8043-90237f93caf1-kube-api-access-67sv6" (OuterVolumeSpecName: "kube-api-access-67sv6") pod "4cce5658-90eb-4be8-8043-90237f93caf1" (UID: "4cce5658-90eb-4be8-8043-90237f93caf1"). InnerVolumeSpecName "kube-api-access-67sv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.432437 4867 scope.go:117] "RemoveContainer" containerID="ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.455095 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4cce5658-90eb-4be8-8043-90237f93caf1" (UID: "4cce5658-90eb-4be8-8043-90237f93caf1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.505021 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cce5658-90eb-4be8-8043-90237f93caf1" (UID: "4cce5658-90eb-4be8-8043-90237f93caf1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.526078 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.526109 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.526120 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67sv6\" (UniqueName: \"kubernetes.io/projected/4cce5658-90eb-4be8-8043-90237f93caf1-kube-api-access-67sv6\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.526129 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.526138 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.526147 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cce5658-90eb-4be8-8043-90237f93caf1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.527715 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-config-data" (OuterVolumeSpecName: "config-data") pod "4cce5658-90eb-4be8-8043-90237f93caf1" (UID: "4cce5658-90eb-4be8-8043-90237f93caf1"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.530107 4867 scope.go:117] "RemoveContainer" containerID="aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e" Jan 01 08:48:20 crc kubenswrapper[4867]: E0101 08:48:20.530653 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e\": container with ID starting with aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e not found: ID does not exist" containerID="aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.530711 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e"} err="failed to get container status \"aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e\": rpc error: code = NotFound desc = could not find container \"aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e\": container with ID starting with aba36056038bccef931f1166daab5563213efd054712f4154bf55ed9d23f8a0e not found: ID does not exist" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.530746 4867 scope.go:117] "RemoveContainer" containerID="36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b" Jan 01 08:48:20 crc kubenswrapper[4867]: E0101 08:48:20.531136 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b\": container with ID starting with 36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b not found: ID does not exist" containerID="36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b" Jan 01 08:48:20 crc 
kubenswrapper[4867]: I0101 08:48:20.531159 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b"} err="failed to get container status \"36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b\": rpc error: code = NotFound desc = could not find container \"36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b\": container with ID starting with 36df4fbe64ddf11d4267fcc0687f2517205ca02235322ef488432ed9afe58c8b not found: ID does not exist" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.531173 4867 scope.go:117] "RemoveContainer" containerID="376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df" Jan 01 08:48:20 crc kubenswrapper[4867]: E0101 08:48:20.531490 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df\": container with ID starting with 376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df not found: ID does not exist" containerID="376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.531551 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df"} err="failed to get container status \"376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df\": rpc error: code = NotFound desc = could not find container \"376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df\": container with ID starting with 376cd306f10b0187c991be1b0eb2323cff0a3d64f01681de6bd541d4a46299df not found: ID does not exist" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.531571 4867 scope.go:117] "RemoveContainer" containerID="ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d" Jan 01 
08:48:20 crc kubenswrapper[4867]: E0101 08:48:20.531870 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d\": container with ID starting with ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d not found: ID does not exist" containerID="ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.531970 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d"} err="failed to get container status \"ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d\": rpc error: code = NotFound desc = could not find container \"ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d\": container with ID starting with ddbd89a04db807a525812d04b77cf422e4abc075a4f93960b9c7d759ffb8531d not found: ID does not exist" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.627676 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cce5658-90eb-4be8-8043-90237f93caf1-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.692800 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.705435 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.717204 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:20 crc kubenswrapper[4867]: E0101 08:48:20.717638 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="ceilometer-notification-agent" Jan 01 08:48:20 crc 
kubenswrapper[4867]: I0101 08:48:20.717658 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="ceilometer-notification-agent" Jan 01 08:48:20 crc kubenswrapper[4867]: E0101 08:48:20.717674 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="sg-core" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.717682 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="sg-core" Jan 01 08:48:20 crc kubenswrapper[4867]: E0101 08:48:20.717705 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="ceilometer-central-agent" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.717713 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="ceilometer-central-agent" Jan 01 08:48:20 crc kubenswrapper[4867]: E0101 08:48:20.717741 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="proxy-httpd" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.717748 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="proxy-httpd" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.717993 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="ceilometer-central-agent" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.718014 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="proxy-httpd" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.718039 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="sg-core" Jan 01 08:48:20 crc 
kubenswrapper[4867]: I0101 08:48:20.718053 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" containerName="ceilometer-notification-agent" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.719804 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.721899 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.722056 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.728199 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.831073 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.831393 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.831426 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-config-data\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc 
kubenswrapper[4867]: I0101 08:48:20.831440 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-run-httpd\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.831475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmbx6\" (UniqueName: \"kubernetes.io/projected/729d3b97-c91b-495d-84df-95c6a94be5de-kube-api-access-hmbx6\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.831502 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-scripts\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.831538 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-log-httpd\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.933812 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-log-httpd\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.933951 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.933979 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.934008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-config-data\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.934024 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-run-httpd\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.934056 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmbx6\" (UniqueName: \"kubernetes.io/projected/729d3b97-c91b-495d-84df-95c6a94be5de-kube-api-access-hmbx6\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.934081 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-scripts\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 
08:48:20.934571 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-run-httpd\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.935397 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-log-httpd\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.940744 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.941067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.941465 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-config-data\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.941607 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-scripts\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " 
pod="openstack/ceilometer-0" Jan 01 08:48:20 crc kubenswrapper[4867]: I0101 08:48:20.950547 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmbx6\" (UniqueName: \"kubernetes.io/projected/729d3b97-c91b-495d-84df-95c6a94be5de-kube-api-access-hmbx6\") pod \"ceilometer-0\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " pod="openstack/ceilometer-0" Jan 01 08:48:21 crc kubenswrapper[4867]: I0101 08:48:21.041478 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:21 crc kubenswrapper[4867]: I0101 08:48:21.086022 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:48:21 crc kubenswrapper[4867]: I0101 08:48:21.086421 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1032e145-2486-4fe1-9bde-b067d64c5d1c" containerName="glance-log" containerID="cri-o://dd7d657d98de1804bb48e8272431ea76481f11e8f2c965c2ac1f114a9dbdcf4b" gracePeriod=30 Jan 01 08:48:21 crc kubenswrapper[4867]: I0101 08:48:21.086540 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1032e145-2486-4fe1-9bde-b067d64c5d1c" containerName="glance-httpd" containerID="cri-o://776c8c16c62f11e76ab7ec02a01a146e73a47c3522a5e9e4d5ea077d152a0c24" gracePeriod=30 Jan 01 08:48:21 crc kubenswrapper[4867]: I0101 08:48:21.150596 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cce5658-90eb-4be8-8043-90237f93caf1" path="/var/lib/kubelet/pods/4cce5658-90eb-4be8-8043-90237f93caf1/volumes" Jan 01 08:48:21 crc kubenswrapper[4867]: I0101 08:48:21.335012 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:48:21 crc kubenswrapper[4867]: I0101 08:48:21.335394 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:48:21 crc kubenswrapper[4867]: I0101 08:48:21.375331 4867 generic.go:334] "Generic (PLEG): container finished" podID="1032e145-2486-4fe1-9bde-b067d64c5d1c" containerID="dd7d657d98de1804bb48e8272431ea76481f11e8f2c965c2ac1f114a9dbdcf4b" exitCode=143 Jan 01 08:48:21 crc kubenswrapper[4867]: I0101 08:48:21.375381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1032e145-2486-4fe1-9bde-b067d64c5d1c","Type":"ContainerDied","Data":"dd7d657d98de1804bb48e8272431ea76481f11e8f2c965c2ac1f114a9dbdcf4b"} Jan 01 08:48:21 crc kubenswrapper[4867]: I0101 08:48:21.548381 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:21 crc kubenswrapper[4867]: W0101 08:48:21.548514 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod729d3b97_c91b_495d_84df_95c6a94be5de.slice/crio-09b6237cf7a2c4a86fecc054b91b0c51715d758dda623f7356574f507970bd38 WatchSource:0}: Error finding container 09b6237cf7a2c4a86fecc054b91b0c51715d758dda623f7356574f507970bd38: Status 404 returned error can't find the container with id 09b6237cf7a2c4a86fecc054b91b0c51715d758dda623f7356574f507970bd38 Jan 01 08:48:22 crc kubenswrapper[4867]: I0101 08:48:22.384798 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"729d3b97-c91b-495d-84df-95c6a94be5de","Type":"ContainerStarted","Data":"09b6237cf7a2c4a86fecc054b91b0c51715d758dda623f7356574f507970bd38"} Jan 01 08:48:22 crc kubenswrapper[4867]: I0101 08:48:22.642244 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.393942 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"729d3b97-c91b-495d-84df-95c6a94be5de","Type":"ContainerStarted","Data":"bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8"} Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.400393 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa829cab-564c-410b-a84f-50bc6bba8676" containerID="bf5a24a0389da79ac334afcfb3dee92dc2c07c58a8ef2298c7e0f9ace80ed7c0" exitCode=0 Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.400440 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa829cab-564c-410b-a84f-50bc6bba8676","Type":"ContainerDied","Data":"bf5a24a0389da79ac334afcfb3dee92dc2c07c58a8ef2298c7e0f9ace80ed7c0"} Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.723725 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.787863 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-httpd-run\") pod \"fa829cab-564c-410b-a84f-50bc6bba8676\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.787945 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"fa829cab-564c-410b-a84f-50bc6bba8676\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.788024 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7nds\" (UniqueName: \"kubernetes.io/projected/fa829cab-564c-410b-a84f-50bc6bba8676-kube-api-access-n7nds\") pod \"fa829cab-564c-410b-a84f-50bc6bba8676\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.788082 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-scripts\") pod \"fa829cab-564c-410b-a84f-50bc6bba8676\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.788125 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-config-data\") pod \"fa829cab-564c-410b-a84f-50bc6bba8676\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.788157 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-public-tls-certs\") pod \"fa829cab-564c-410b-a84f-50bc6bba8676\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.788183 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-combined-ca-bundle\") pod \"fa829cab-564c-410b-a84f-50bc6bba8676\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.788216 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-logs\") pod \"fa829cab-564c-410b-a84f-50bc6bba8676\" (UID: \"fa829cab-564c-410b-a84f-50bc6bba8676\") " Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.788972 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fa829cab-564c-410b-a84f-50bc6bba8676" (UID: "fa829cab-564c-410b-a84f-50bc6bba8676"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.789127 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-logs" (OuterVolumeSpecName: "logs") pod "fa829cab-564c-410b-a84f-50bc6bba8676" (UID: "fa829cab-564c-410b-a84f-50bc6bba8676"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.796017 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "fa829cab-564c-410b-a84f-50bc6bba8676" (UID: "fa829cab-564c-410b-a84f-50bc6bba8676"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.808070 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa829cab-564c-410b-a84f-50bc6bba8676-kube-api-access-n7nds" (OuterVolumeSpecName: "kube-api-access-n7nds") pod "fa829cab-564c-410b-a84f-50bc6bba8676" (UID: "fa829cab-564c-410b-a84f-50bc6bba8676"). InnerVolumeSpecName "kube-api-access-n7nds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.808090 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-scripts" (OuterVolumeSpecName: "scripts") pod "fa829cab-564c-410b-a84f-50bc6bba8676" (UID: "fa829cab-564c-410b-a84f-50bc6bba8676"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.832633 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa829cab-564c-410b-a84f-50bc6bba8676" (UID: "fa829cab-564c-410b-a84f-50bc6bba8676"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.883756 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-config-data" (OuterVolumeSpecName: "config-data") pod "fa829cab-564c-410b-a84f-50bc6bba8676" (UID: "fa829cab-564c-410b-a84f-50bc6bba8676"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.889655 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.889676 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa829cab-564c-410b-a84f-50bc6bba8676-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.889694 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.889703 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7nds\" (UniqueName: \"kubernetes.io/projected/fa829cab-564c-410b-a84f-50bc6bba8676-kube-api-access-n7nds\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.889712 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.889720 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-config-data\") 
on node \"crc\" DevicePath \"\"" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.889749 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.898049 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fa829cab-564c-410b-a84f-50bc6bba8676" (UID: "fa829cab-564c-410b-a84f-50bc6bba8676"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.913791 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.991442 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:23 crc kubenswrapper[4867]: I0101 08:48:23.991481 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa829cab-564c-410b-a84f-50bc6bba8676-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.417860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"729d3b97-c91b-495d-84df-95c6a94be5de","Type":"ContainerStarted","Data":"b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea"} Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.419856 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"fa829cab-564c-410b-a84f-50bc6bba8676","Type":"ContainerDied","Data":"d73749da25c692d25167f228707058330f1f86f64e88c7d64588f8b946fc367d"} Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.419867 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.419909 4867 scope.go:117] "RemoveContainer" containerID="bf5a24a0389da79ac334afcfb3dee92dc2c07c58a8ef2298c7e0f9ace80ed7c0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.424080 4867 generic.go:334] "Generic (PLEG): container finished" podID="1032e145-2486-4fe1-9bde-b067d64c5d1c" containerID="776c8c16c62f11e76ab7ec02a01a146e73a47c3522a5e9e4d5ea077d152a0c24" exitCode=0 Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.424119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1032e145-2486-4fe1-9bde-b067d64c5d1c","Type":"ContainerDied","Data":"776c8c16c62f11e76ab7ec02a01a146e73a47c3522a5e9e4d5ea077d152a0c24"} Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.466186 4867 scope.go:117] "RemoveContainer" containerID="82b64cc4d5a9681c6b6e7f280a45133b1d7d8b66620bf65a4a254e9c97580d6a" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.623634 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.651827 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.659497 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:48:24 crc kubenswrapper[4867]: E0101 08:48:24.659943 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa829cab-564c-410b-a84f-50bc6bba8676" containerName="glance-log" Jan 01 08:48:24 crc kubenswrapper[4867]: 
I0101 08:48:24.659959 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa829cab-564c-410b-a84f-50bc6bba8676" containerName="glance-log" Jan 01 08:48:24 crc kubenswrapper[4867]: E0101 08:48:24.659979 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa829cab-564c-410b-a84f-50bc6bba8676" containerName="glance-httpd" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.659987 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa829cab-564c-410b-a84f-50bc6bba8676" containerName="glance-httpd" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.660140 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa829cab-564c-410b-a84f-50bc6bba8676" containerName="glance-httpd" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.660166 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa829cab-564c-410b-a84f-50bc6bba8676" containerName="glance-log" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.661029 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.662943 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.663273 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.667248 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.706201 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x97bq\" (UniqueName: \"kubernetes.io/projected/6f47f095-abde-4e07-8edf-d0a318043581-kube-api-access-x97bq\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.706256 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.706293 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-logs\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.706413 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.706449 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.706468 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-config-data\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.706491 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-scripts\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.706659 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.724146 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.807567 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-httpd-run\") pod \"1032e145-2486-4fe1-9bde-b067d64c5d1c\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.807622 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-logs\") pod \"1032e145-2486-4fe1-9bde-b067d64c5d1c\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.807664 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-internal-tls-certs\") pod \"1032e145-2486-4fe1-9bde-b067d64c5d1c\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.807702 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-scripts\") pod \"1032e145-2486-4fe1-9bde-b067d64c5d1c\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.807723 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp7ng\" (UniqueName: \"kubernetes.io/projected/1032e145-2486-4fe1-9bde-b067d64c5d1c-kube-api-access-qp7ng\") pod \"1032e145-2486-4fe1-9bde-b067d64c5d1c\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.807750 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-config-data\") pod \"1032e145-2486-4fe1-9bde-b067d64c5d1c\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.807784 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"1032e145-2486-4fe1-9bde-b067d64c5d1c\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.807806 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-combined-ca-bundle\") pod \"1032e145-2486-4fe1-9bde-b067d64c5d1c\" (UID: \"1032e145-2486-4fe1-9bde-b067d64c5d1c\") " Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.808026 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x97bq\" (UniqueName: \"kubernetes.io/projected/6f47f095-abde-4e07-8edf-d0a318043581-kube-api-access-x97bq\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.808056 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.808088 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-logs\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 
08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.808122 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.808142 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.808159 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-config-data\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.808179 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-scripts\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.808211 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.808449 4867 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1032e145-2486-4fe1-9bde-b067d64c5d1c" (UID: "1032e145-2486-4fe1-9bde-b067d64c5d1c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.809048 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.812517 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-logs\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.812780 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-logs" (OuterVolumeSpecName: "logs") pod "1032e145-2486-4fe1-9bde-b067d64c5d1c" (UID: "1032e145-2486-4fe1-9bde-b067d64c5d1c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.812827 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.815353 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "1032e145-2486-4fe1-9bde-b067d64c5d1c" (UID: "1032e145-2486-4fe1-9bde-b067d64c5d1c"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.816217 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-scripts" (OuterVolumeSpecName: "scripts") pod "1032e145-2486-4fe1-9bde-b067d64c5d1c" (UID: "1032e145-2486-4fe1-9bde-b067d64c5d1c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.819016 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.823699 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1032e145-2486-4fe1-9bde-b067d64c5d1c-kube-api-access-qp7ng" (OuterVolumeSpecName: "kube-api-access-qp7ng") pod "1032e145-2486-4fe1-9bde-b067d64c5d1c" (UID: "1032e145-2486-4fe1-9bde-b067d64c5d1c"). InnerVolumeSpecName "kube-api-access-qp7ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.826193 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-config-data\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.827742 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-scripts\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.828094 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " 
pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.833793 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x97bq\" (UniqueName: \"kubernetes.io/projected/6f47f095-abde-4e07-8edf-d0a318043581-kube-api-access-x97bq\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.857328 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1032e145-2486-4fe1-9bde-b067d64c5d1c" (UID: "1032e145-2486-4fe1-9bde-b067d64c5d1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.861163 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " pod="openstack/glance-default-external-api-0" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.888166 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1032e145-2486-4fe1-9bde-b067d64c5d1c" (UID: "1032e145-2486-4fe1-9bde-b067d64c5d1c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.897841 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-config-data" (OuterVolumeSpecName: "config-data") pod "1032e145-2486-4fe1-9bde-b067d64c5d1c" (UID: "1032e145-2486-4fe1-9bde-b067d64c5d1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.910164 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.910198 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.910209 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1032e145-2486-4fe1-9bde-b067d64c5d1c-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.910218 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.910226 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.910234 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp7ng\" (UniqueName: 
\"kubernetes.io/projected/1032e145-2486-4fe1-9bde-b067d64c5d1c-kube-api-access-qp7ng\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.910244 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1032e145-2486-4fe1-9bde-b067d64c5d1c-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.910274 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.928633 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 01 08:48:24 crc kubenswrapper[4867]: I0101 08:48:24.987554 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.012194 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.144984 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa829cab-564c-410b-a84f-50bc6bba8676" path="/var/lib/kubelet/pods/fa829cab-564c-410b-a84f-50bc6bba8676/volumes" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.479414 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1032e145-2486-4fe1-9bde-b067d64c5d1c","Type":"ContainerDied","Data":"4396ed1a1dd6df9589cdcf1e89d1ab371a26dda2389465c77cc40a9c4009e360"} Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.479780 4867 scope.go:117] "RemoveContainer" 
containerID="776c8c16c62f11e76ab7ec02a01a146e73a47c3522a5e9e4d5ea077d152a0c24" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.479470 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.493863 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"729d3b97-c91b-495d-84df-95c6a94be5de","Type":"ContainerStarted","Data":"01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801"} Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.512185 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.534697 4867 scope.go:117] "RemoveContainer" containerID="dd7d657d98de1804bb48e8272431ea76481f11e8f2c965c2ac1f114a9dbdcf4b" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.537916 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.574938 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:48:25 crc kubenswrapper[4867]: E0101 08:48:25.575328 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1032e145-2486-4fe1-9bde-b067d64c5d1c" containerName="glance-log" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.575344 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1032e145-2486-4fe1-9bde-b067d64c5d1c" containerName="glance-log" Jan 01 08:48:25 crc kubenswrapper[4867]: E0101 08:48:25.575355 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1032e145-2486-4fe1-9bde-b067d64c5d1c" containerName="glance-httpd" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.575362 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1032e145-2486-4fe1-9bde-b067d64c5d1c" 
containerName="glance-httpd" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.575522 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1032e145-2486-4fe1-9bde-b067d64c5d1c" containerName="glance-log" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.575535 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1032e145-2486-4fe1-9bde-b067d64c5d1c" containerName="glance-httpd" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.576410 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.579398 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.580448 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.607963 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.635849 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.727050 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-logs\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.727104 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.727140 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc5vm\" (UniqueName: \"kubernetes.io/projected/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-kube-api-access-jc5vm\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.727188 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.727248 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.727280 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.727320 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.727345 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.829061 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.829114 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.829156 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-logs\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.829179 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.829210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc5vm\" (UniqueName: \"kubernetes.io/projected/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-kube-api-access-jc5vm\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.829286 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.829345 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.829374 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.829680 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.829770 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.829975 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.834979 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.836550 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.837279 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " 
pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.848927 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc5vm\" (UniqueName: \"kubernetes.io/projected/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-kube-api-access-jc5vm\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.849604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.868407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " pod="openstack/glance-default-internal-api-0" Jan 01 08:48:25 crc kubenswrapper[4867]: I0101 08:48:25.908773 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:26 crc kubenswrapper[4867]: I0101 08:48:26.466000 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:48:26 crc kubenswrapper[4867]: I0101 08:48:26.528043 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e809a11a-a5d8-49a0-9d9d-cac6a399dd35","Type":"ContainerStarted","Data":"bf8f8652376973036beb2d302c94d8c532f851e8ac6d076c012daf285f7fcf11"} Jan 01 08:48:26 crc kubenswrapper[4867]: I0101 08:48:26.536675 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"729d3b97-c91b-495d-84df-95c6a94be5de","Type":"ContainerStarted","Data":"bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763"} Jan 01 08:48:26 crc kubenswrapper[4867]: I0101 08:48:26.537053 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="ceilometer-central-agent" containerID="cri-o://bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8" gracePeriod=30 Jan 01 08:48:26 crc kubenswrapper[4867]: I0101 08:48:26.537155 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 01 08:48:26 crc kubenswrapper[4867]: I0101 08:48:26.537139 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="proxy-httpd" containerID="cri-o://bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763" gracePeriod=30 Jan 01 08:48:26 crc kubenswrapper[4867]: I0101 08:48:26.537329 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="sg-core" 
containerID="cri-o://01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801" gracePeriod=30 Jan 01 08:48:26 crc kubenswrapper[4867]: I0101 08:48:26.537382 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="ceilometer-notification-agent" containerID="cri-o://b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea" gracePeriod=30 Jan 01 08:48:26 crc kubenswrapper[4867]: I0101 08:48:26.552962 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6f47f095-abde-4e07-8edf-d0a318043581","Type":"ContainerStarted","Data":"c1af335d05f310408a3a3e7e9c132db515267848f5873efa7c468ee6eea3edc6"} Jan 01 08:48:26 crc kubenswrapper[4867]: I0101 08:48:26.553003 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6f47f095-abde-4e07-8edf-d0a318043581","Type":"ContainerStarted","Data":"822f82508c998db7e356ecb226620c5945ed607a34c00995e9b70039a61c4c4d"} Jan 01 08:48:26 crc kubenswrapper[4867]: I0101 08:48:26.573052 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.301201688 podStartE2EDuration="6.573015609s" podCreationTimestamp="2026-01-01 08:48:20 +0000 UTC" firstStartedPulling="2026-01-01 08:48:21.550724192 +0000 UTC m=+1310.685992951" lastFinishedPulling="2026-01-01 08:48:25.822538103 +0000 UTC m=+1314.957806872" observedRunningTime="2026-01-01 08:48:26.563162942 +0000 UTC m=+1315.698431701" watchObservedRunningTime="2026-01-01 08:48:26.573015609 +0000 UTC m=+1315.708284378" Jan 01 08:48:27 crc kubenswrapper[4867]: I0101 08:48:27.154141 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1032e145-2486-4fe1-9bde-b067d64c5d1c" path="/var/lib/kubelet/pods/1032e145-2486-4fe1-9bde-b067d64c5d1c/volumes" Jan 01 08:48:27 crc kubenswrapper[4867]: 
I0101 08:48:27.575865 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6f47f095-abde-4e07-8edf-d0a318043581","Type":"ContainerStarted","Data":"6fd8f4c7059e184922dd9a91f3056bb550d7c290a243aea1fe9c949fb9fa29c7"} Jan 01 08:48:27 crc kubenswrapper[4867]: I0101 08:48:27.579789 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e809a11a-a5d8-49a0-9d9d-cac6a399dd35","Type":"ContainerStarted","Data":"c147d7635f762a0bb4d5c3b4b921b293ef2acefe6bfd101dcab003dc2f076886"} Jan 01 08:48:27 crc kubenswrapper[4867]: I0101 08:48:27.582485 4867 generic.go:334] "Generic (PLEG): container finished" podID="729d3b97-c91b-495d-84df-95c6a94be5de" containerID="bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763" exitCode=0 Jan 01 08:48:27 crc kubenswrapper[4867]: I0101 08:48:27.582511 4867 generic.go:334] "Generic (PLEG): container finished" podID="729d3b97-c91b-495d-84df-95c6a94be5de" containerID="01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801" exitCode=2 Jan 01 08:48:27 crc kubenswrapper[4867]: I0101 08:48:27.582519 4867 generic.go:334] "Generic (PLEG): container finished" podID="729d3b97-c91b-495d-84df-95c6a94be5de" containerID="b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea" exitCode=0 Jan 01 08:48:27 crc kubenswrapper[4867]: I0101 08:48:27.582533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"729d3b97-c91b-495d-84df-95c6a94be5de","Type":"ContainerDied","Data":"bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763"} Jan 01 08:48:27 crc kubenswrapper[4867]: I0101 08:48:27.582554 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"729d3b97-c91b-495d-84df-95c6a94be5de","Type":"ContainerDied","Data":"01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801"} Jan 01 08:48:27 crc kubenswrapper[4867]: I0101 
08:48:27.582565 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"729d3b97-c91b-495d-84df-95c6a94be5de","Type":"ContainerDied","Data":"b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea"} Jan 01 08:48:27 crc kubenswrapper[4867]: I0101 08:48:27.597459 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.597436184 podStartE2EDuration="3.597436184s" podCreationTimestamp="2026-01-01 08:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:27.591142457 +0000 UTC m=+1316.726411236" watchObservedRunningTime="2026-01-01 08:48:27.597436184 +0000 UTC m=+1316.732704953" Jan 01 08:48:28 crc kubenswrapper[4867]: I0101 08:48:28.593708 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e809a11a-a5d8-49a0-9d9d-cac6a399dd35","Type":"ContainerStarted","Data":"9185c2834b9cf63d0aa63913819769f2b534971b2a8528f9b981383d4142d637"} Jan 01 08:48:28 crc kubenswrapper[4867]: I0101 08:48:28.619698 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.619679786 podStartE2EDuration="3.619679786s" podCreationTimestamp="2026-01-01 08:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:28.612862365 +0000 UTC m=+1317.748131124" watchObservedRunningTime="2026-01-01 08:48:28.619679786 +0000 UTC m=+1317.754948555" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.154369 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.306550 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmbx6\" (UniqueName: \"kubernetes.io/projected/729d3b97-c91b-495d-84df-95c6a94be5de-kube-api-access-hmbx6\") pod \"729d3b97-c91b-495d-84df-95c6a94be5de\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.306616 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-log-httpd\") pod \"729d3b97-c91b-495d-84df-95c6a94be5de\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.306652 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-scripts\") pod \"729d3b97-c91b-495d-84df-95c6a94be5de\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.306718 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-combined-ca-bundle\") pod \"729d3b97-c91b-495d-84df-95c6a94be5de\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.306841 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-config-data\") pod \"729d3b97-c91b-495d-84df-95c6a94be5de\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.306908 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-sg-core-conf-yaml\") pod \"729d3b97-c91b-495d-84df-95c6a94be5de\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.306978 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-run-httpd\") pod \"729d3b97-c91b-495d-84df-95c6a94be5de\" (UID: \"729d3b97-c91b-495d-84df-95c6a94be5de\") " Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.307715 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "729d3b97-c91b-495d-84df-95c6a94be5de" (UID: "729d3b97-c91b-495d-84df-95c6a94be5de"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.308313 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "729d3b97-c91b-495d-84df-95c6a94be5de" (UID: "729d3b97-c91b-495d-84df-95c6a94be5de"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.312316 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729d3b97-c91b-495d-84df-95c6a94be5de-kube-api-access-hmbx6" (OuterVolumeSpecName: "kube-api-access-hmbx6") pod "729d3b97-c91b-495d-84df-95c6a94be5de" (UID: "729d3b97-c91b-495d-84df-95c6a94be5de"). InnerVolumeSpecName "kube-api-access-hmbx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.313193 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-scripts" (OuterVolumeSpecName: "scripts") pod "729d3b97-c91b-495d-84df-95c6a94be5de" (UID: "729d3b97-c91b-495d-84df-95c6a94be5de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.338908 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "729d3b97-c91b-495d-84df-95c6a94be5de" (UID: "729d3b97-c91b-495d-84df-95c6a94be5de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.384157 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "729d3b97-c91b-495d-84df-95c6a94be5de" (UID: "729d3b97-c91b-495d-84df-95c6a94be5de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.406498 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-config-data" (OuterVolumeSpecName: "config-data") pod "729d3b97-c91b-495d-84df-95c6a94be5de" (UID: "729d3b97-c91b-495d-84df-95c6a94be5de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.408522 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.408551 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.408560 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.408570 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.408579 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/729d3b97-c91b-495d-84df-95c6a94be5de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.408587 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/729d3b97-c91b-495d-84df-95c6a94be5de-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.408594 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmbx6\" (UniqueName: \"kubernetes.io/projected/729d3b97-c91b-495d-84df-95c6a94be5de-kube-api-access-hmbx6\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.605011 4867 generic.go:334] "Generic 
(PLEG): container finished" podID="729d3b97-c91b-495d-84df-95c6a94be5de" containerID="bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8" exitCode=0 Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.605059 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.605077 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"729d3b97-c91b-495d-84df-95c6a94be5de","Type":"ContainerDied","Data":"bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8"} Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.606663 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"729d3b97-c91b-495d-84df-95c6a94be5de","Type":"ContainerDied","Data":"09b6237cf7a2c4a86fecc054b91b0c51715d758dda623f7356574f507970bd38"} Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.606689 4867 scope.go:117] "RemoveContainer" containerID="bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.632251 4867 scope.go:117] "RemoveContainer" containerID="01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.652626 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.692708 4867 scope.go:117] "RemoveContainer" containerID="b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.704059 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.721717 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:29 crc kubenswrapper[4867]: E0101 08:48:29.722343 4867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="proxy-httpd" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.722375 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="proxy-httpd" Jan 01 08:48:29 crc kubenswrapper[4867]: E0101 08:48:29.722399 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="ceilometer-central-agent" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.722411 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="ceilometer-central-agent" Jan 01 08:48:29 crc kubenswrapper[4867]: E0101 08:48:29.722442 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="sg-core" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.722455 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="sg-core" Jan 01 08:48:29 crc kubenswrapper[4867]: E0101 08:48:29.722476 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="ceilometer-notification-agent" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.722488 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="ceilometer-notification-agent" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.722819 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="ceilometer-central-agent" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.722856 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="ceilometer-notification-agent" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 
08:48:29.722875 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="sg-core" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.722916 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" containerName="proxy-httpd" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.728475 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.732093 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.732942 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.736872 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.737353 4867 scope.go:117] "RemoveContainer" containerID="bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.774401 4867 scope.go:117] "RemoveContainer" containerID="bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763" Jan 01 08:48:29 crc kubenswrapper[4867]: E0101 08:48:29.774984 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763\": container with ID starting with bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763 not found: ID does not exist" containerID="bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.775576 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763"} err="failed to get container status \"bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763\": rpc error: code = NotFound desc = could not find container \"bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763\": container with ID starting with bf6d1a523aff13af983d83dfc5871069577c9c6fd4877cae0a3e08ebb192d763 not found: ID does not exist" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.775612 4867 scope.go:117] "RemoveContainer" containerID="01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801" Jan 01 08:48:29 crc kubenswrapper[4867]: E0101 08:48:29.776173 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801\": container with ID starting with 01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801 not found: ID does not exist" containerID="01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.776250 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801"} err="failed to get container status \"01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801\": rpc error: code = NotFound desc = could not find container \"01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801\": container with ID starting with 01d147c8b60e0db382ccfee96dda73f6f1d4d56c1244365a0642b8a2884c0801 not found: ID does not exist" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.776271 4867 scope.go:117] "RemoveContainer" containerID="b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea" Jan 01 08:48:29 crc kubenswrapper[4867]: E0101 08:48:29.777029 4867 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea\": container with ID starting with b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea not found: ID does not exist" containerID="b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.777060 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea"} err="failed to get container status \"b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea\": rpc error: code = NotFound desc = could not find container \"b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea\": container with ID starting with b0b813c33c239527473c704b81ea860ad11d339e881bf783b1a01e954d9c99ea not found: ID does not exist" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.777078 4867 scope.go:117] "RemoveContainer" containerID="bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8" Jan 01 08:48:29 crc kubenswrapper[4867]: E0101 08:48:29.777455 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8\": container with ID starting with bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8 not found: ID does not exist" containerID="bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.777485 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8"} err="failed to get container status \"bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8\": rpc error: code = NotFound desc = could not find container 
\"bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8\": container with ID starting with bea04d47776e0f86d9a905f5aec7b5cebd18f56f48da7d6c287d0f1524a907a8 not found: ID does not exist" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.815056 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.815098 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-run-httpd\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.815252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-config-data\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.815327 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-scripts\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.815372 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.815564 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-log-httpd\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.815768 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4jxg\" (UniqueName: \"kubernetes.io/projected/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-kube-api-access-f4jxg\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.918120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4jxg\" (UniqueName: \"kubernetes.io/projected/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-kube-api-access-f4jxg\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.918706 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.919628 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-run-httpd\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.919718 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-config-data\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.919756 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-scripts\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.919793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.920017 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-log-httpd\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.920955 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-log-httpd\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.921736 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-run-httpd\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " 
pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.923478 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.927152 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.927598 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-config-data\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.942173 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-scripts\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:29 crc kubenswrapper[4867]: I0101 08:48:29.949006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4jxg\" (UniqueName: \"kubernetes.io/projected/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-kube-api-access-f4jxg\") pod \"ceilometer-0\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " pod="openstack/ceilometer-0" Jan 01 08:48:30 crc kubenswrapper[4867]: I0101 08:48:30.054996 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:48:30 crc kubenswrapper[4867]: I0101 08:48:30.521250 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:48:30 crc kubenswrapper[4867]: I0101 08:48:30.618046 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03a3b9bd-55d1-469d-a4b9-4db9992bbf56","Type":"ContainerStarted","Data":"151bb62db5f3ef7d4932771898c41abb4725fcc2ae3643ebdf430b7efdda85db"} Jan 01 08:48:30 crc kubenswrapper[4867]: I0101 08:48:30.620054 4867 generic.go:334] "Generic (PLEG): container finished" podID="68297d63-ca47-4d11-8e40-3d6903527773" containerID="95161ca79e37e61717b65705d5f72df0f5d3ee56eaca3808bc4567d29151f991" exitCode=0 Jan 01 08:48:30 crc kubenswrapper[4867]: I0101 08:48:30.620090 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfnlz" event={"ID":"68297d63-ca47-4d11-8e40-3d6903527773","Type":"ContainerDied","Data":"95161ca79e37e61717b65705d5f72df0f5d3ee56eaca3808bc4567d29151f991"} Jan 01 08:48:31 crc kubenswrapper[4867]: I0101 08:48:31.145021 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729d3b97-c91b-495d-84df-95c6a94be5de" path="/var/lib/kubelet/pods/729d3b97-c91b-495d-84df-95c6a94be5de/volumes" Jan 01 08:48:31 crc kubenswrapper[4867]: I0101 08:48:31.630197 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03a3b9bd-55d1-469d-a4b9-4db9992bbf56","Type":"ContainerStarted","Data":"978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217"} Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.034641 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.160477 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-scripts\") pod \"68297d63-ca47-4d11-8e40-3d6903527773\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.160764 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-combined-ca-bundle\") pod \"68297d63-ca47-4d11-8e40-3d6903527773\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.160812 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsqt8\" (UniqueName: \"kubernetes.io/projected/68297d63-ca47-4d11-8e40-3d6903527773-kube-api-access-gsqt8\") pod \"68297d63-ca47-4d11-8e40-3d6903527773\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.160844 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-config-data\") pod \"68297d63-ca47-4d11-8e40-3d6903527773\" (UID: \"68297d63-ca47-4d11-8e40-3d6903527773\") " Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.164010 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-scripts" (OuterVolumeSpecName: "scripts") pod "68297d63-ca47-4d11-8e40-3d6903527773" (UID: "68297d63-ca47-4d11-8e40-3d6903527773"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.165039 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68297d63-ca47-4d11-8e40-3d6903527773-kube-api-access-gsqt8" (OuterVolumeSpecName: "kube-api-access-gsqt8") pod "68297d63-ca47-4d11-8e40-3d6903527773" (UID: "68297d63-ca47-4d11-8e40-3d6903527773"). InnerVolumeSpecName "kube-api-access-gsqt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.187685 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-config-data" (OuterVolumeSpecName: "config-data") pod "68297d63-ca47-4d11-8e40-3d6903527773" (UID: "68297d63-ca47-4d11-8e40-3d6903527773"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.197096 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68297d63-ca47-4d11-8e40-3d6903527773" (UID: "68297d63-ca47-4d11-8e40-3d6903527773"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.262969 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.263001 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.263012 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsqt8\" (UniqueName: \"kubernetes.io/projected/68297d63-ca47-4d11-8e40-3d6903527773-kube-api-access-gsqt8\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.263020 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68297d63-ca47-4d11-8e40-3d6903527773-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.645944 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03a3b9bd-55d1-469d-a4b9-4db9992bbf56","Type":"ContainerStarted","Data":"52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601"} Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.648476 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfnlz" event={"ID":"68297d63-ca47-4d11-8e40-3d6903527773","Type":"ContainerDied","Data":"e85110665a9760d9243d90ff2d6fa3fc090d42026e6702e65e1f66826e5feb36"} Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.648523 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e85110665a9760d9243d90ff2d6fa3fc090d42026e6702e65e1f66826e5feb36" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 
08:48:32.648557 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfnlz" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.762944 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 01 08:48:32 crc kubenswrapper[4867]: E0101 08:48:32.763446 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68297d63-ca47-4d11-8e40-3d6903527773" containerName="nova-cell0-conductor-db-sync" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.763468 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="68297d63-ca47-4d11-8e40-3d6903527773" containerName="nova-cell0-conductor-db-sync" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.763681 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="68297d63-ca47-4d11-8e40-3d6903527773" containerName="nova-cell0-conductor-db-sync" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.764415 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.768223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-khd69" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.794020 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.797318 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.871667 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") " pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.871737 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68bv9\" (UniqueName: \"kubernetes.io/projected/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-kube-api-access-68bv9\") pod \"nova-cell0-conductor-0\" (UID: \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") " pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.871866 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") " pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.973596 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") " pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.973715 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") " pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.973748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68bv9\" (UniqueName: \"kubernetes.io/projected/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-kube-api-access-68bv9\") pod \"nova-cell0-conductor-0\" (UID: \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") " pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.979039 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") " pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:32 crc kubenswrapper[4867]: I0101 08:48:32.994530 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") " pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:33 crc kubenswrapper[4867]: I0101 08:48:33.011466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68bv9\" (UniqueName: \"kubernetes.io/projected/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-kube-api-access-68bv9\") pod \"nova-cell0-conductor-0\" (UID: 
\"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") " pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:33 crc kubenswrapper[4867]: I0101 08:48:33.099395 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:33 crc kubenswrapper[4867]: I0101 08:48:33.546731 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 01 08:48:33 crc kubenswrapper[4867]: W0101 08:48:33.553451 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cb3185a_7a53_4d1a_a1c0_ec2fa0490ffb.slice/crio-3a2f796a54c9f1518366d91291aafe612e4c60f2a9c2315e7ef35e839ec7d762 WatchSource:0}: Error finding container 3a2f796a54c9f1518366d91291aafe612e4c60f2a9c2315e7ef35e839ec7d762: Status 404 returned error can't find the container with id 3a2f796a54c9f1518366d91291aafe612e4c60f2a9c2315e7ef35e839ec7d762 Jan 01 08:48:33 crc kubenswrapper[4867]: I0101 08:48:33.665616 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb","Type":"ContainerStarted","Data":"3a2f796a54c9f1518366d91291aafe612e4c60f2a9c2315e7ef35e839ec7d762"} Jan 01 08:48:33 crc kubenswrapper[4867]: I0101 08:48:33.669149 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03a3b9bd-55d1-469d-a4b9-4db9992bbf56","Type":"ContainerStarted","Data":"50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33"} Jan 01 08:48:34 crc kubenswrapper[4867]: I0101 08:48:34.682602 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb","Type":"ContainerStarted","Data":"e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a"} Jan 01 08:48:34 crc kubenswrapper[4867]: I0101 08:48:34.682984 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:34 crc kubenswrapper[4867]: I0101 08:48:34.686518 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03a3b9bd-55d1-469d-a4b9-4db9992bbf56","Type":"ContainerStarted","Data":"740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81"} Jan 01 08:48:34 crc kubenswrapper[4867]: I0101 08:48:34.686651 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 01 08:48:34 crc kubenswrapper[4867]: I0101 08:48:34.718302 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.718279891 podStartE2EDuration="2.718279891s" podCreationTimestamp="2026-01-01 08:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:34.710625047 +0000 UTC m=+1323.845893826" watchObservedRunningTime="2026-01-01 08:48:34.718279891 +0000 UTC m=+1323.853548670" Jan 01 08:48:34 crc kubenswrapper[4867]: I0101 08:48:34.736725 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.045467672 podStartE2EDuration="5.736708188s" podCreationTimestamp="2026-01-01 08:48:29 +0000 UTC" firstStartedPulling="2026-01-01 08:48:30.517792201 +0000 UTC m=+1319.653061010" lastFinishedPulling="2026-01-01 08:48:34.209032757 +0000 UTC m=+1323.344301526" observedRunningTime="2026-01-01 08:48:34.734734523 +0000 UTC m=+1323.870003332" watchObservedRunningTime="2026-01-01 08:48:34.736708188 +0000 UTC m=+1323.871976967" Jan 01 08:48:34 crc kubenswrapper[4867]: I0101 08:48:34.988490 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 01 08:48:34 crc kubenswrapper[4867]: I0101 08:48:34.988574 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 01 08:48:35 crc kubenswrapper[4867]: I0101 08:48:35.048739 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 01 08:48:35 crc kubenswrapper[4867]: I0101 08:48:35.061724 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 01 08:48:35 crc kubenswrapper[4867]: I0101 08:48:35.695399 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 01 08:48:35 crc kubenswrapper[4867]: I0101 08:48:35.695692 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 01 08:48:35 crc kubenswrapper[4867]: I0101 08:48:35.909138 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:35 crc kubenswrapper[4867]: I0101 08:48:35.909188 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:35 crc kubenswrapper[4867]: I0101 08:48:35.954392 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:35 crc kubenswrapper[4867]: I0101 08:48:35.957314 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:36 crc kubenswrapper[4867]: I0101 08:48:36.704709 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:36 crc kubenswrapper[4867]: I0101 08:48:36.706037 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:37 crc kubenswrapper[4867]: I0101 08:48:37.743710 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 01 08:48:37 crc kubenswrapper[4867]: I0101 08:48:37.744226 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 01 08:48:37 crc kubenswrapper[4867]: I0101 08:48:37.748726 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.140522 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.642558 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-dlqwt"] Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.643964 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.646225 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.646370 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.666170 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dlqwt"] Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.686633 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.710865 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-config-data\") pod \"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " 
pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.710979 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.711099 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6d4f\" (UniqueName: \"kubernetes.io/projected/48551ac5-5853-40d9-843b-c14538e078d7-kube-api-access-d6d4f\") pod \"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.711225 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-scripts\") pod \"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.723150 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.812780 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-config-data\") pod \"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.812843 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.812922 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6d4f\" (UniqueName: \"kubernetes.io/projected/48551ac5-5853-40d9-843b-c14538e078d7-kube-api-access-d6d4f\") pod \"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.812967 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-scripts\") pod \"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.819505 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.826625 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-config-data\") pod \"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.838939 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-scripts\") pod 
\"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.842435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6d4f\" (UniqueName: \"kubernetes.io/projected/48551ac5-5853-40d9-843b-c14538e078d7-kube-api-access-d6d4f\") pod \"nova-cell0-cell-mapping-dlqwt\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") " pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.856275 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.857781 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.869552 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.892154 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.907049 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.908686 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.916858 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.922298 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:38 crc kubenswrapper[4867]: I0101 08:48:38.967256 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dlqwt" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.016114 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwbzn\" (UniqueName: \"kubernetes.io/projected/45703f7c-e7c0-426b-9fb1-2f9db0295f23-kube-api-access-kwbzn\") pod \"nova-api-0\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.016185 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45703f7c-e7c0-426b-9fb1-2f9db0295f23-logs\") pod \"nova-api-0\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.016235 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-config-data\") pod \"nova-api-0\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.016256 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63390c2c-eb58-4b38-b11a-8c26319d66cb-logs\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.016281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlz5z\" (UniqueName: \"kubernetes.io/projected/63390c2c-eb58-4b38-b11a-8c26319d66cb-kube-api-access-rlz5z\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.016305 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.016350 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-config-data\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.016367 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.037749 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.039431 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.074736 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.120828 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwbzn\" (UniqueName: \"kubernetes.io/projected/45703f7c-e7c0-426b-9fb1-2f9db0295f23-kube-api-access-kwbzn\") pod \"nova-api-0\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.120894 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-config-data\") pod \"nova-scheduler-0\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.120954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45703f7c-e7c0-426b-9fb1-2f9db0295f23-logs\") pod \"nova-api-0\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.120978 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.121006 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-config-data\") pod \"nova-api-0\" (UID: 
\"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.121028 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63390c2c-eb58-4b38-b11a-8c26319d66cb-logs\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.121072 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlz5z\" (UniqueName: \"kubernetes.io/projected/63390c2c-eb58-4b38-b11a-8c26319d66cb-kube-api-access-rlz5z\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.121099 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.121143 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-config-data\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.121163 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.121208 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v54xg\" (UniqueName: \"kubernetes.io/projected/8c38e70a-7d9b-4601-a3e6-524ad937e365-kube-api-access-v54xg\") pod \"nova-scheduler-0\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.121923 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45703f7c-e7c0-426b-9fb1-2f9db0295f23-logs\") pod \"nova-api-0\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.123524 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63390c2c-eb58-4b38-b11a-8c26319d66cb-logs\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.134599 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-config-data\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.136714 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.157808 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.165095 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwbzn\" (UniqueName: 
\"kubernetes.io/projected/45703f7c-e7c0-426b-9fb1-2f9db0295f23-kube-api-access-kwbzn\") pod \"nova-api-0\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.165440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlz5z\" (UniqueName: \"kubernetes.io/projected/63390c2c-eb58-4b38-b11a-8c26319d66cb-kube-api-access-rlz5z\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.175193 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.175815 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-config-data\") pod \"nova-api-0\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.204265 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77f475f9d5-jjvqz"] Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.210008 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.213786 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77f475f9d5-jjvqz"] Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.223677 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.223800 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v54xg\" (UniqueName: \"kubernetes.io/projected/8c38e70a-7d9b-4601-a3e6-524ad937e365-kube-api-access-v54xg\") pod \"nova-scheduler-0\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.223833 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-config-data\") pod \"nova-scheduler-0\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.234525 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.234863 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.236047 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.242634 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-config-data\") pod \"nova-scheduler-0\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.243151 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.243682 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.253873 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.254314 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.264986 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v54xg\" (UniqueName: \"kubernetes.io/projected/8c38e70a-7d9b-4601-a3e6-524ad937e365-kube-api-access-v54xg\") pod \"nova-scheduler-0\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.325418 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.325456 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-svc\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.325936 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdw8\" (UniqueName: \"kubernetes.io/projected/519d8b68-1fa4-425c-adc6-0a0687e3b165-kube-api-access-rtdw8\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.325982 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-nb\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " 
pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.326000 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-config\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.326063 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhhm6\" (UniqueName: \"kubernetes.io/projected/603e44e2-6aba-45b8-a80a-96e4420dbcfc-kube-api-access-lhhm6\") pod \"nova-cell1-novncproxy-0\" (UID: \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.327161 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-swift-storage-0\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.327312 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-sb\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.327475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.375383 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.431906 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtdw8\" (UniqueName: \"kubernetes.io/projected/519d8b68-1fa4-425c-adc6-0a0687e3b165-kube-api-access-rtdw8\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.431956 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-nb\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.431978 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-config\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.432002 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhhm6\" (UniqueName: \"kubernetes.io/projected/603e44e2-6aba-45b8-a80a-96e4420dbcfc-kube-api-access-lhhm6\") pod \"nova-cell1-novncproxy-0\" (UID: \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.432034 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-swift-storage-0\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.432081 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-sb\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.432125 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.432175 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.432195 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-svc\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.434259 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-nb\") 
pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.434766 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-config\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.434859 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-sb\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.435371 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-swift-storage-0\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.435906 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-svc\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.441927 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.450622 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtdw8\" (UniqueName: \"kubernetes.io/projected/519d8b68-1fa4-425c-adc6-0a0687e3b165-kube-api-access-rtdw8\") pod \"dnsmasq-dns-77f475f9d5-jjvqz\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.456441 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.463066 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhhm6\" (UniqueName: \"kubernetes.io/projected/603e44e2-6aba-45b8-a80a-96e4420dbcfc-kube-api-access-lhhm6\") pod \"nova-cell1-novncproxy-0\" (UID: \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.485367 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.575353 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.578608 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.650251 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dlqwt"] Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.777982 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dlqwt" event={"ID":"48551ac5-5853-40d9-843b-c14538e078d7","Type":"ContainerStarted","Data":"861b5fd474cdb12297e0340e840a1faaa203725cac515f907aa8e942738baf2c"} Jan 01 08:48:39 crc kubenswrapper[4867]: I0101 08:48:39.930848 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.134395 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fbpqm"] Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.135862 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.139709 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.139982 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.158247 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fbpqm"] Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.228216 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.261477 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-config-data\") pod 
\"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.261558 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-scripts\") pod \"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.261612 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.261632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj9mm\" (UniqueName: \"kubernetes.io/projected/606a07c4-3bbe-4968-a035-6d41b2cf3803-kube-api-access-rj9mm\") pod \"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.333946 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.341449 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:48:40 crc kubenswrapper[4867]: W0101 08:48:40.342588 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod603e44e2_6aba_45b8_a80a_96e4420dbcfc.slice/crio-135b9bf948682dde6ad1a19eeb3a691029185e8c8c7b87d6bbd18506800b07c7 WatchSource:0}: Error finding container 135b9bf948682dde6ad1a19eeb3a691029185e8c8c7b87d6bbd18506800b07c7: Status 404 returned error can't find the container with id 135b9bf948682dde6ad1a19eeb3a691029185e8c8c7b87d6bbd18506800b07c7 Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.363927 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.363976 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj9mm\" (UniqueName: \"kubernetes.io/projected/606a07c4-3bbe-4968-a035-6d41b2cf3803-kube-api-access-rj9mm\") pod \"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.364065 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-config-data\") pod \"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.364117 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-scripts\") pod \"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " 
pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.370050 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.371209 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-config-data\") pod \"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.372299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-scripts\") pod \"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.394341 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj9mm\" (UniqueName: \"kubernetes.io/projected/606a07c4-3bbe-4968-a035-6d41b2cf3803-kube-api-access-rj9mm\") pod \"nova-cell1-conductor-db-sync-fbpqm\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") " pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.459292 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fbpqm" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.539833 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77f475f9d5-jjvqz"] Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.794124 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dlqwt" event={"ID":"48551ac5-5853-40d9-843b-c14538e078d7","Type":"ContainerStarted","Data":"60e97c9fdf3987bf4c61e1d7445036f685cabc3d422531233de8d828501df5ed"} Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.799543 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8c38e70a-7d9b-4601-a3e6-524ad937e365","Type":"ContainerStarted","Data":"5bad67c9ec939fd2f87956c4e692955c04321d0320124ebe293a66c596a1d680"} Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.806125 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"603e44e2-6aba-45b8-a80a-96e4420dbcfc","Type":"ContainerStarted","Data":"135b9bf948682dde6ad1a19eeb3a691029185e8c8c7b87d6bbd18506800b07c7"} Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.808933 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" event={"ID":"519d8b68-1fa4-425c-adc6-0a0687e3b165","Type":"ContainerStarted","Data":"b1e6f1789be25a4f226d5042a81af74160c714cc59c4b763cb6b3918929d6e1d"} Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.812601 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"63390c2c-eb58-4b38-b11a-8c26319d66cb","Type":"ContainerStarted","Data":"dfb3a21221589664cc85b6623cd23917e40a91768902f9bef0fbea5cbc706d01"} Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.814074 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"45703f7c-e7c0-426b-9fb1-2f9db0295f23","Type":"ContainerStarted","Data":"60bd5b0cabe31cb4a0f42c2a6144ee80613d1e5a5d90c787e7ed7673f8faeb37"} Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.820160 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-dlqwt" podStartSLOduration=2.8201434069999998 podStartE2EDuration="2.820143407s" podCreationTimestamp="2026-01-01 08:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:40.816390782 +0000 UTC m=+1329.951659551" watchObservedRunningTime="2026-01-01 08:48:40.820143407 +0000 UTC m=+1329.955412176" Jan 01 08:48:40 crc kubenswrapper[4867]: I0101 08:48:40.908716 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fbpqm"] Jan 01 08:48:41 crc kubenswrapper[4867]: I0101 08:48:41.826758 4867 generic.go:334] "Generic (PLEG): container finished" podID="519d8b68-1fa4-425c-adc6-0a0687e3b165" containerID="44136619ebd82bbba2da47b47d55992993ea904571bfdfc06834af4fe36380f7" exitCode=0 Jan 01 08:48:41 crc kubenswrapper[4867]: I0101 08:48:41.826823 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" event={"ID":"519d8b68-1fa4-425c-adc6-0a0687e3b165","Type":"ContainerDied","Data":"44136619ebd82bbba2da47b47d55992993ea904571bfdfc06834af4fe36380f7"} Jan 01 08:48:41 crc kubenswrapper[4867]: I0101 08:48:41.831483 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fbpqm" event={"ID":"606a07c4-3bbe-4968-a035-6d41b2cf3803","Type":"ContainerStarted","Data":"2eec9c315c78c3cc244f82eab542c7f81c78a38711682898002441ea0482a959"} Jan 01 08:48:41 crc kubenswrapper[4867]: I0101 08:48:41.831546 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fbpqm" 
event={"ID":"606a07c4-3bbe-4968-a035-6d41b2cf3803","Type":"ContainerStarted","Data":"9545cebd21386a6ae9925b70a3ffceabe9a56cc8e2a4036e854baa209dd8f46a"} Jan 01 08:48:41 crc kubenswrapper[4867]: I0101 08:48:41.868350 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fbpqm" podStartSLOduration=1.868333898 podStartE2EDuration="1.868333898s" podCreationTimestamp="2026-01-01 08:48:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:41.866321162 +0000 UTC m=+1331.001589931" watchObservedRunningTime="2026-01-01 08:48:41.868333898 +0000 UTC m=+1331.003602667" Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.087283 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.103105 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.852048 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" event={"ID":"519d8b68-1fa4-425c-adc6-0a0687e3b165","Type":"ContainerStarted","Data":"f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5"} Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.852664 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.858056 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"63390c2c-eb58-4b38-b11a-8c26319d66cb","Type":"ContainerStarted","Data":"047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706"} Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.859788 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"45703f7c-e7c0-426b-9fb1-2f9db0295f23","Type":"ContainerStarted","Data":"28e20cf58885b1ba6853ea54eeca00c752b6873592dcefe99a5ea42e2ad55b82"} Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.861615 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8c38e70a-7d9b-4601-a3e6-524ad937e365","Type":"ContainerStarted","Data":"2334025c66327f2a613400895c9d08424c69ef98ce42baaff54ab605006a369e"} Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.865320 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"603e44e2-6aba-45b8-a80a-96e4420dbcfc","Type":"ContainerStarted","Data":"f2735ab20c4cfd9df9b85bab04e44b30b480005938646b7829854374acce9273"} Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.865456 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="603e44e2-6aba-45b8-a80a-96e4420dbcfc" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f2735ab20c4cfd9df9b85bab04e44b30b480005938646b7829854374acce9273" gracePeriod=30 Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.869921 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" podStartSLOduration=5.869910582 podStartE2EDuration="5.869910582s" podCreationTimestamp="2026-01-01 08:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:43.868107602 +0000 UTC m=+1333.003376381" watchObservedRunningTime="2026-01-01 08:48:43.869910582 +0000 UTC m=+1333.005179361" Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.891334 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.897818054 podStartE2EDuration="5.891318122s" podCreationTimestamp="2026-01-01 08:48:38 +0000 UTC" 
firstStartedPulling="2026-01-01 08:48:40.342695264 +0000 UTC m=+1329.477964023" lastFinishedPulling="2026-01-01 08:48:43.336195302 +0000 UTC m=+1332.471464091" observedRunningTime="2026-01-01 08:48:43.888620456 +0000 UTC m=+1333.023889245" watchObservedRunningTime="2026-01-01 08:48:43.891318122 +0000 UTC m=+1333.026586891" Jan 01 08:48:43 crc kubenswrapper[4867]: I0101 08:48:43.910925 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.924971535 podStartE2EDuration="4.910912141s" podCreationTimestamp="2026-01-01 08:48:39 +0000 UTC" firstStartedPulling="2026-01-01 08:48:40.347694014 +0000 UTC m=+1329.482962773" lastFinishedPulling="2026-01-01 08:48:43.33363461 +0000 UTC m=+1332.468903379" observedRunningTime="2026-01-01 08:48:43.904240194 +0000 UTC m=+1333.039508953" watchObservedRunningTime="2026-01-01 08:48:43.910912141 +0000 UTC m=+1333.046180910" Jan 01 08:48:44 crc kubenswrapper[4867]: I0101 08:48:44.486327 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 01 08:48:44 crc kubenswrapper[4867]: I0101 08:48:44.580014 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:48:44 crc kubenswrapper[4867]: I0101 08:48:44.876290 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"63390c2c-eb58-4b38-b11a-8c26319d66cb","Type":"ContainerStarted","Data":"38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c"} Jan 01 08:48:44 crc kubenswrapper[4867]: I0101 08:48:44.876501 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="63390c2c-eb58-4b38-b11a-8c26319d66cb" containerName="nova-metadata-metadata" containerID="cri-o://38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c" gracePeriod=30 Jan 01 08:48:44 crc kubenswrapper[4867]: I0101 
08:48:44.876490 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="63390c2c-eb58-4b38-b11a-8c26319d66cb" containerName="nova-metadata-log" containerID="cri-o://047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706" gracePeriod=30 Jan 01 08:48:44 crc kubenswrapper[4867]: I0101 08:48:44.880586 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45703f7c-e7c0-426b-9fb1-2f9db0295f23","Type":"ContainerStarted","Data":"c2cfbb056087e12b2a2885b24f64300c04f4b5224ad362d98504d48a15610138"} Jan 01 08:48:44 crc kubenswrapper[4867]: I0101 08:48:44.910951 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.824191671 podStartE2EDuration="6.910927062s" podCreationTimestamp="2026-01-01 08:48:38 +0000 UTC" firstStartedPulling="2026-01-01 08:48:40.249836181 +0000 UTC m=+1329.385104940" lastFinishedPulling="2026-01-01 08:48:43.336571562 +0000 UTC m=+1332.471840331" observedRunningTime="2026-01-01 08:48:44.902068714 +0000 UTC m=+1334.037337483" watchObservedRunningTime="2026-01-01 08:48:44.910927062 +0000 UTC m=+1334.046195871" Jan 01 08:48:44 crc kubenswrapper[4867]: I0101 08:48:44.927971 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.54698321 podStartE2EDuration="6.927944679s" podCreationTimestamp="2026-01-01 08:48:38 +0000 UTC" firstStartedPulling="2026-01-01 08:48:39.955165011 +0000 UTC m=+1329.090433780" lastFinishedPulling="2026-01-01 08:48:43.33612649 +0000 UTC m=+1332.471395249" observedRunningTime="2026-01-01 08:48:44.924807471 +0000 UTC m=+1334.060076250" watchObservedRunningTime="2026-01-01 08:48:44.927944679 +0000 UTC m=+1334.063213488" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.510803 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.617924 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-config-data\") pod \"63390c2c-eb58-4b38-b11a-8c26319d66cb\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.618059 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlz5z\" (UniqueName: \"kubernetes.io/projected/63390c2c-eb58-4b38-b11a-8c26319d66cb-kube-api-access-rlz5z\") pod \"63390c2c-eb58-4b38-b11a-8c26319d66cb\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.618148 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63390c2c-eb58-4b38-b11a-8c26319d66cb-logs\") pod \"63390c2c-eb58-4b38-b11a-8c26319d66cb\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.618218 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-combined-ca-bundle\") pod \"63390c2c-eb58-4b38-b11a-8c26319d66cb\" (UID: \"63390c2c-eb58-4b38-b11a-8c26319d66cb\") " Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.618659 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63390c2c-eb58-4b38-b11a-8c26319d66cb-logs" (OuterVolumeSpecName: "logs") pod "63390c2c-eb58-4b38-b11a-8c26319d66cb" (UID: "63390c2c-eb58-4b38-b11a-8c26319d66cb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.619154 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63390c2c-eb58-4b38-b11a-8c26319d66cb-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.624740 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63390c2c-eb58-4b38-b11a-8c26319d66cb-kube-api-access-rlz5z" (OuterVolumeSpecName: "kube-api-access-rlz5z") pod "63390c2c-eb58-4b38-b11a-8c26319d66cb" (UID: "63390c2c-eb58-4b38-b11a-8c26319d66cb"). InnerVolumeSpecName "kube-api-access-rlz5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.661676 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63390c2c-eb58-4b38-b11a-8c26319d66cb" (UID: "63390c2c-eb58-4b38-b11a-8c26319d66cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.661708 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-config-data" (OuterVolumeSpecName: "config-data") pod "63390c2c-eb58-4b38-b11a-8c26319d66cb" (UID: "63390c2c-eb58-4b38-b11a-8c26319d66cb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.720729 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.721006 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63390c2c-eb58-4b38-b11a-8c26319d66cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.721017 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlz5z\" (UniqueName: \"kubernetes.io/projected/63390c2c-eb58-4b38-b11a-8c26319d66cb-kube-api-access-rlz5z\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.889826 4867 generic.go:334] "Generic (PLEG): container finished" podID="63390c2c-eb58-4b38-b11a-8c26319d66cb" containerID="38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c" exitCode=0 Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.889870 4867 generic.go:334] "Generic (PLEG): container finished" podID="63390c2c-eb58-4b38-b11a-8c26319d66cb" containerID="047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706" exitCode=143 Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.889970 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.889972 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"63390c2c-eb58-4b38-b11a-8c26319d66cb","Type":"ContainerDied","Data":"38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c"} Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.890041 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"63390c2c-eb58-4b38-b11a-8c26319d66cb","Type":"ContainerDied","Data":"047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706"} Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.890065 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"63390c2c-eb58-4b38-b11a-8c26319d66cb","Type":"ContainerDied","Data":"dfb3a21221589664cc85b6623cd23917e40a91768902f9bef0fbea5cbc706d01"} Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.890093 4867 scope.go:117] "RemoveContainer" containerID="38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.949033 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.949187 4867 scope.go:117] "RemoveContainer" containerID="047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.975735 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.987725 4867 scope.go:117] "RemoveContainer" containerID="38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c" Jan 01 08:48:45 crc kubenswrapper[4867]: E0101 08:48:45.988717 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c\": container with ID starting with 38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c not found: ID does not exist" containerID="38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.988747 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c"} err="failed to get container status \"38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c\": rpc error: code = NotFound desc = could not find container \"38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c\": container with ID starting with 38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c not found: ID does not exist" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.988768 4867 scope.go:117] "RemoveContainer" containerID="047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.990820 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:45 crc kubenswrapper[4867]: E0101 08:48:45.991213 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63390c2c-eb58-4b38-b11a-8c26319d66cb" containerName="nova-metadata-log" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.991230 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="63390c2c-eb58-4b38-b11a-8c26319d66cb" containerName="nova-metadata-log" Jan 01 08:48:45 crc kubenswrapper[4867]: E0101 08:48:45.991249 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63390c2c-eb58-4b38-b11a-8c26319d66cb" containerName="nova-metadata-metadata" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.991255 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="63390c2c-eb58-4b38-b11a-8c26319d66cb" 
containerName="nova-metadata-metadata" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.991409 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="63390c2c-eb58-4b38-b11a-8c26319d66cb" containerName="nova-metadata-log" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.991426 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="63390c2c-eb58-4b38-b11a-8c26319d66cb" containerName="nova-metadata-metadata" Jan 01 08:48:45 crc kubenswrapper[4867]: E0101 08:48:45.991537 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706\": container with ID starting with 047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706 not found: ID does not exist" containerID="047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.991589 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706"} err="failed to get container status \"047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706\": rpc error: code = NotFound desc = could not find container \"047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706\": container with ID starting with 047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706 not found: ID does not exist" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.991621 4867 scope.go:117] "RemoveContainer" containerID="38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.992275 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.997303 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c"} err="failed to get container status \"38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c\": rpc error: code = NotFound desc = could not find container \"38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c\": container with ID starting with 38f6ae2c8a60714645b4e3beb596e737c38110232893f19f4cfdf5c6797cb71c not found: ID does not exist" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.997332 4867 scope.go:117] "RemoveContainer" containerID="047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.997464 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.997631 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 01 08:48:45 crc kubenswrapper[4867]: I0101 08:48:45.998217 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706"} err="failed to get container status \"047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706\": rpc error: code = NotFound desc = could not find container \"047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706\": container with ID starting with 047a80958dd0e1eae278aa0e7ad8f850258006aa4e94ace30da468c07f2b2706 not found: ID does not exist" Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.001048 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.038510 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-logs\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0" Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.038580 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0" Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.038631 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0" Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.038683 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2lkc\" (UniqueName: \"kubernetes.io/projected/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-kube-api-access-q2lkc\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0" Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.038714 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-config-data\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0" Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.140794 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0" Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.140852 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0" Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.140924 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2lkc\" (UniqueName: \"kubernetes.io/projected/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-kube-api-access-q2lkc\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0" Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.140958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-config-data\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0" Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.141050 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-logs\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0" Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.143437 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-logs\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " 
pod="openstack/nova-metadata-0"
Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.144083 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0"
Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.148728 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0"
Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.159823 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-config-data\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0"
Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.184924 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2lkc\" (UniqueName: \"kubernetes.io/projected/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-kube-api-access-q2lkc\") pod \"nova-metadata-0\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " pod="openstack/nova-metadata-0"
Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.324655 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.835711 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 01 08:48:46 crc kubenswrapper[4867]: I0101 08:48:46.904307 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6d8ad7-76ec-450a-8c49-d2c0396bc764","Type":"ContainerStarted","Data":"484b028c307aff1e3893af7ba91e9f7426723d718bcc980938fbdee0d83948ef"}
Jan 01 08:48:47 crc kubenswrapper[4867]: I0101 08:48:47.154776 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63390c2c-eb58-4b38-b11a-8c26319d66cb" path="/var/lib/kubelet/pods/63390c2c-eb58-4b38-b11a-8c26319d66cb/volumes"
Jan 01 08:48:47 crc kubenswrapper[4867]: I0101 08:48:47.934228 4867 generic.go:334] "Generic (PLEG): container finished" podID="48551ac5-5853-40d9-843b-c14538e078d7" containerID="60e97c9fdf3987bf4c61e1d7445036f685cabc3d422531233de8d828501df5ed" exitCode=0
Jan 01 08:48:47 crc kubenswrapper[4867]: I0101 08:48:47.934290 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dlqwt" event={"ID":"48551ac5-5853-40d9-843b-c14538e078d7","Type":"ContainerDied","Data":"60e97c9fdf3987bf4c61e1d7445036f685cabc3d422531233de8d828501df5ed"}
Jan 01 08:48:47 crc kubenswrapper[4867]: I0101 08:48:47.939122 4867 generic.go:334] "Generic (PLEG): container finished" podID="606a07c4-3bbe-4968-a035-6d41b2cf3803" containerID="2eec9c315c78c3cc244f82eab542c7f81c78a38711682898002441ea0482a959" exitCode=0
Jan 01 08:48:47 crc kubenswrapper[4867]: I0101 08:48:47.939204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fbpqm" event={"ID":"606a07c4-3bbe-4968-a035-6d41b2cf3803","Type":"ContainerDied","Data":"2eec9c315c78c3cc244f82eab542c7f81c78a38711682898002441ea0482a959"}
Jan 01 08:48:47 crc kubenswrapper[4867]: I0101 08:48:47.942917 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6d8ad7-76ec-450a-8c49-d2c0396bc764","Type":"ContainerStarted","Data":"7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6"}
Jan 01 08:48:47 crc kubenswrapper[4867]: I0101 08:48:47.942967 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6d8ad7-76ec-450a-8c49-d2c0396bc764","Type":"ContainerStarted","Data":"5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8"}
Jan 01 08:48:48 crc kubenswrapper[4867]: I0101 08:48:48.018669 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.018648062 podStartE2EDuration="3.018648062s" podCreationTimestamp="2026-01-01 08:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:48.010963276 +0000 UTC m=+1337.146232065" watchObservedRunningTime="2026-01-01 08:48:48.018648062 +0000 UTC m=+1337.153916831"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.245589 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.246296 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.456874 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dlqwt"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.467127 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fbpqm"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.486142 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.522628 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-combined-ca-bundle\") pod \"48551ac5-5853-40d9-843b-c14538e078d7\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") "
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.522694 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6d4f\" (UniqueName: \"kubernetes.io/projected/48551ac5-5853-40d9-843b-c14538e078d7-kube-api-access-d6d4f\") pod \"48551ac5-5853-40d9-843b-c14538e078d7\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") "
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.522793 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-scripts\") pod \"606a07c4-3bbe-4968-a035-6d41b2cf3803\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") "
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.522857 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-config-data\") pod \"48551ac5-5853-40d9-843b-c14538e078d7\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") "
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.522897 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj9mm\" (UniqueName: \"kubernetes.io/projected/606a07c4-3bbe-4968-a035-6d41b2cf3803-kube-api-access-rj9mm\") pod \"606a07c4-3bbe-4968-a035-6d41b2cf3803\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") "
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.522934 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-config-data\") pod \"606a07c4-3bbe-4968-a035-6d41b2cf3803\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") "
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.523075 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-combined-ca-bundle\") pod \"606a07c4-3bbe-4968-a035-6d41b2cf3803\" (UID: \"606a07c4-3bbe-4968-a035-6d41b2cf3803\") "
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.523122 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-scripts\") pod \"48551ac5-5853-40d9-843b-c14538e078d7\" (UID: \"48551ac5-5853-40d9-843b-c14538e078d7\") "
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.529810 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-scripts" (OuterVolumeSpecName: "scripts") pod "606a07c4-3bbe-4968-a035-6d41b2cf3803" (UID: "606a07c4-3bbe-4968-a035-6d41b2cf3803"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.529899 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-scripts" (OuterVolumeSpecName: "scripts") pod "48551ac5-5853-40d9-843b-c14538e078d7" (UID: "48551ac5-5853-40d9-843b-c14538e078d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.531398 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48551ac5-5853-40d9-843b-c14538e078d7-kube-api-access-d6d4f" (OuterVolumeSpecName: "kube-api-access-d6d4f") pod "48551ac5-5853-40d9-843b-c14538e078d7" (UID: "48551ac5-5853-40d9-843b-c14538e078d7"). InnerVolumeSpecName "kube-api-access-d6d4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.545718 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.546105 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/606a07c4-3bbe-4968-a035-6d41b2cf3803-kube-api-access-rj9mm" (OuterVolumeSpecName: "kube-api-access-rj9mm") pod "606a07c4-3bbe-4968-a035-6d41b2cf3803" (UID: "606a07c4-3bbe-4968-a035-6d41b2cf3803"). InnerVolumeSpecName "kube-api-access-rj9mm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.563310 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "606a07c4-3bbe-4968-a035-6d41b2cf3803" (UID: "606a07c4-3bbe-4968-a035-6d41b2cf3803"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.571547 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-config-data" (OuterVolumeSpecName: "config-data") pod "606a07c4-3bbe-4968-a035-6d41b2cf3803" (UID: "606a07c4-3bbe-4968-a035-6d41b2cf3803"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.578417 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.590844 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-config-data" (OuterVolumeSpecName: "config-data") pod "48551ac5-5853-40d9-843b-c14538e078d7" (UID: "48551ac5-5853-40d9-843b-c14538e078d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.620542 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48551ac5-5853-40d9-843b-c14538e078d7" (UID: "48551ac5-5853-40d9-843b-c14538e078d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.626256 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-config-data\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.626460 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.626545 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-scripts\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.626622 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.626853 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6d4f\" (UniqueName: \"kubernetes.io/projected/48551ac5-5853-40d9-843b-c14538e078d7-kube-api-access-d6d4f\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.626894 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606a07c4-3bbe-4968-a035-6d41b2cf3803-scripts\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.626906 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48551ac5-5853-40d9-843b-c14538e078d7-config-data\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.626916 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj9mm\" (UniqueName: \"kubernetes.io/projected/606a07c4-3bbe-4968-a035-6d41b2cf3803-kube-api-access-rj9mm\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.657094 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b79d6d5d9-r54bp"]
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.657369 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" podUID="75ce895d-6831-4af5-9e10-481ce05ec976" containerName="dnsmasq-dns" containerID="cri-o://28e9aa24e604b0ba373ef4c4672912d39dad49eeb4da59f06c89ed951c249b9b" gracePeriod=10
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.973560 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fbpqm" event={"ID":"606a07c4-3bbe-4968-a035-6d41b2cf3803","Type":"ContainerDied","Data":"9545cebd21386a6ae9925b70a3ffceabe9a56cc8e2a4036e854baa209dd8f46a"}
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.973607 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9545cebd21386a6ae9925b70a3ffceabe9a56cc8e2a4036e854baa209dd8f46a"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.973630 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fbpqm"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.981277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dlqwt" event={"ID":"48551ac5-5853-40d9-843b-c14538e078d7","Type":"ContainerDied","Data":"861b5fd474cdb12297e0340e840a1faaa203725cac515f907aa8e942738baf2c"}
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.981313 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="861b5fd474cdb12297e0340e840a1faaa203725cac515f907aa8e942738baf2c"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.981398 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dlqwt"
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.992826 4867 generic.go:334] "Generic (PLEG): container finished" podID="75ce895d-6831-4af5-9e10-481ce05ec976" containerID="28e9aa24e604b0ba373ef4c4672912d39dad49eeb4da59f06c89ed951c249b9b" exitCode=0
Jan 01 08:48:49 crc kubenswrapper[4867]: I0101 08:48:49.993562 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" event={"ID":"75ce895d-6831-4af5-9e10-481ce05ec976","Type":"ContainerDied","Data":"28e9aa24e604b0ba373ef4c4672912d39dad49eeb4da59f06c89ed951c249b9b"}
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.043234 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.054258 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 01 08:48:50 crc kubenswrapper[4867]: E0101 08:48:50.054999 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48551ac5-5853-40d9-843b-c14538e078d7" containerName="nova-manage"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.055072 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="48551ac5-5853-40d9-843b-c14538e078d7" containerName="nova-manage"
Jan 01 08:48:50 crc kubenswrapper[4867]: E0101 08:48:50.055135 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606a07c4-3bbe-4968-a035-6d41b2cf3803" containerName="nova-cell1-conductor-db-sync"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.055218 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="606a07c4-3bbe-4968-a035-6d41b2cf3803" containerName="nova-cell1-conductor-db-sync"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.055453 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="606a07c4-3bbe-4968-a035-6d41b2cf3803" containerName="nova-cell1-conductor-db-sync"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.055589 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="48551ac5-5853-40d9-843b-c14538e078d7" containerName="nova-manage"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.056280 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.059825 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.069930 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.087657 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.139723 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz6wl\" (UniqueName: \"kubernetes.io/projected/75ce895d-6831-4af5-9e10-481ce05ec976-kube-api-access-vz6wl\") pod \"75ce895d-6831-4af5-9e10-481ce05ec976\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") "
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.139904 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-sb\") pod \"75ce895d-6831-4af5-9e10-481ce05ec976\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") "
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.139952 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-swift-storage-0\") pod \"75ce895d-6831-4af5-9e10-481ce05ec976\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") "
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.140014 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-nb\") pod \"75ce895d-6831-4af5-9e10-481ce05ec976\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") "
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.140142 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-config\") pod \"75ce895d-6831-4af5-9e10-481ce05ec976\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") "
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.140291 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-svc\") pod \"75ce895d-6831-4af5-9e10-481ce05ec976\" (UID: \"75ce895d-6831-4af5-9e10-481ce05ec976\") "
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.140596 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") " pod="openstack/nova-cell1-conductor-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.140682 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") " pod="openstack/nova-cell1-conductor-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.140731 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx5tm\" (UniqueName: \"kubernetes.io/projected/8799ae41-c9cb-409a-ac59-3e6b59bb0198-kube-api-access-kx5tm\") pod \"nova-cell1-conductor-0\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") " pod="openstack/nova-cell1-conductor-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.144066 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ce895d-6831-4af5-9e10-481ce05ec976-kube-api-access-vz6wl" (OuterVolumeSpecName: "kube-api-access-vz6wl") pod "75ce895d-6831-4af5-9e10-481ce05ec976" (UID: "75ce895d-6831-4af5-9e10-481ce05ec976"). InnerVolumeSpecName "kube-api-access-vz6wl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.197551 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "75ce895d-6831-4af5-9e10-481ce05ec976" (UID: "75ce895d-6831-4af5-9e10-481ce05ec976"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.242909 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") " pod="openstack/nova-cell1-conductor-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.243211 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") " pod="openstack/nova-cell1-conductor-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.243743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx5tm\" (UniqueName: \"kubernetes.io/projected/8799ae41-c9cb-409a-ac59-3e6b59bb0198-kube-api-access-kx5tm\") pod \"nova-cell1-conductor-0\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") " pod="openstack/nova-cell1-conductor-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.243871 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.243956 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz6wl\" (UniqueName: \"kubernetes.io/projected/75ce895d-6831-4af5-9e10-481ce05ec976-kube-api-access-vz6wl\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.244444 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75ce895d-6831-4af5-9e10-481ce05ec976" (UID: "75ce895d-6831-4af5-9e10-481ce05ec976"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.251644 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") " pod="openstack/nova-cell1-conductor-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.254141 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") " pod="openstack/nova-cell1-conductor-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.274928 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75ce895d-6831-4af5-9e10-481ce05ec976" (UID: "75ce895d-6831-4af5-9e10-481ce05ec976"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.275583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx5tm\" (UniqueName: \"kubernetes.io/projected/8799ae41-c9cb-409a-ac59-3e6b59bb0198-kube-api-access-kx5tm\") pod \"nova-cell1-conductor-0\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") " pod="openstack/nova-cell1-conductor-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.283506 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-config" (OuterVolumeSpecName: "config") pod "75ce895d-6831-4af5-9e10-481ce05ec976" (UID: "75ce895d-6831-4af5-9e10-481ce05ec976"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.304371 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75ce895d-6831-4af5-9e10-481ce05ec976" (UID: "75ce895d-6831-4af5-9e10-481ce05ec976"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.319071 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.319335 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.319443 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.319764 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerName="nova-api-log" containerID="cri-o://28e20cf58885b1ba6853ea54eeca00c752b6873592dcefe99a5ea42e2ad55b82" gracePeriod=30
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.320279 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerName="nova-api-api" containerID="cri-o://c2cfbb056087e12b2a2885b24f64300c04f4b5224ad362d98504d48a15610138" gracePeriod=30
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.344440 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.344709 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f6d8ad7-76ec-450a-8c49-d2c0396bc764" containerName="nova-metadata-log" containerID="cri-o://5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8" gracePeriod=30
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.344768 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f6d8ad7-76ec-450a-8c49-d2c0396bc764" containerName="nova-metadata-metadata" containerID="cri-o://7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6" gracePeriod=30
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.345816 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.345833 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.345844 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.345853 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ce895d-6831-4af5-9e10-481ce05ec976-config\") on node \"crc\" DevicePath \"\""
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.396133 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.757164 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.895860 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 01 08:48:50 crc kubenswrapper[4867]: I0101 08:48:50.954063 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.011103 4867 generic.go:334] "Generic (PLEG): container finished" podID="4f6d8ad7-76ec-450a-8c49-d2c0396bc764" containerID="7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6" exitCode=0
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.011790 4867 generic.go:334] "Generic (PLEG): container finished" podID="4f6d8ad7-76ec-450a-8c49-d2c0396bc764" containerID="5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8" exitCode=143
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.011216 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.011223 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6d8ad7-76ec-450a-8c49-d2c0396bc764","Type":"ContainerDied","Data":"7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6"}
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.012152 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6d8ad7-76ec-450a-8c49-d2c0396bc764","Type":"ContainerDied","Data":"5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8"}
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.012237 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f6d8ad7-76ec-450a-8c49-d2c0396bc764","Type":"ContainerDied","Data":"484b028c307aff1e3893af7ba91e9f7426723d718bcc980938fbdee0d83948ef"}
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.012344 4867 scope.go:117] "RemoveContainer" containerID="7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6"
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.018605 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8799ae41-c9cb-409a-ac59-3e6b59bb0198","Type":"ContainerStarted","Data":"26a9e5e2df70974612bfa34e9b15e287492a7dc38a03f008f32a904f9ed08b17"}
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.021908 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp" event={"ID":"75ce895d-6831-4af5-9e10-481ce05ec976","Type":"ContainerDied","Data":"ff22583786a24ca906bb95ac067f368cd098fb90fade5ad13377558e08ffe79d"}
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.021962 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b79d6d5d9-r54bp"
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.035553 4867 generic.go:334] "Generic (PLEG): container finished" podID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerID="28e20cf58885b1ba6853ea54eeca00c752b6873592dcefe99a5ea42e2ad55b82" exitCode=143
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.035973 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45703f7c-e7c0-426b-9fb1-2f9db0295f23","Type":"ContainerDied","Data":"28e20cf58885b1ba6853ea54eeca00c752b6873592dcefe99a5ea42e2ad55b82"}
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.054673 4867 scope.go:117] "RemoveContainer" containerID="5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8"
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.067104 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b79d6d5d9-r54bp"]
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.072932 4867 scope.go:117] "RemoveContainer" containerID="7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6"
Jan 01 08:48:51 crc kubenswrapper[4867]: E0101 08:48:51.073324 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6\": container with ID starting with 7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6 not found: ID does not exist" containerID="7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6"
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.073380 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6"} err="failed to get container status \"7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6\": rpc error: code = NotFound desc = could not find container \"7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6\": container with ID starting with 7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6 not found: ID does not exist"
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.073414 4867 scope.go:117] "RemoveContainer" containerID="5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8"
Jan 01 08:48:51 crc kubenswrapper[4867]: E0101 08:48:51.073695 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8\": container with ID starting with 5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8 not found: ID does not exist" containerID="5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8"
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.073725 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8"} err="failed to get container status \"5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8\": rpc error: code = NotFound desc = could not find container \"5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8\": container with ID starting with 5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8 not found: ID does not exist"
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.073743 4867 scope.go:117] "RemoveContainer" containerID="7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6"
Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.074963 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6"} err="failed to get container status \"7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6\": rpc error: code = NotFound desc = could
not find container \"7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6\": container with ID starting with 7e0c2be2f217fd4044aab87644f56292979f8cb4dce2b4a5929cf4dcdf057eb6 not found: ID does not exist" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.075005 4867 scope.go:117] "RemoveContainer" containerID="5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.075468 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8"} err="failed to get container status \"5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8\": rpc error: code = NotFound desc = could not find container \"5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8\": container with ID starting with 5abe6c3a0d7fc60ad04b531e33b4b38647a2189e9acb85d581b7652b371759a8 not found: ID does not exist" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.075520 4867 scope.go:117] "RemoveContainer" containerID="28e9aa24e604b0ba373ef4c4672912d39dad49eeb4da59f06c89ed951c249b9b" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.077541 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b79d6d5d9-r54bp"] Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.094480 4867 scope.go:117] "RemoveContainer" containerID="63ee244fea7e994dedf00c867ecb5721ef862ee406239ee082533fb853e9f50b" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.142659 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ce895d-6831-4af5-9e10-481ce05ec976" path="/var/lib/kubelet/pods/75ce895d-6831-4af5-9e10-481ce05ec976/volumes" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.160859 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-config-data\") pod \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.161185 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2lkc\" (UniqueName: \"kubernetes.io/projected/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-kube-api-access-q2lkc\") pod \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.161448 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-nova-metadata-tls-certs\") pod \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.161572 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-combined-ca-bundle\") pod \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.161708 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-logs\") pod \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\" (UID: \"4f6d8ad7-76ec-450a-8c49-d2c0396bc764\") " Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.162659 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-logs" (OuterVolumeSpecName: "logs") pod "4f6d8ad7-76ec-450a-8c49-d2c0396bc764" (UID: "4f6d8ad7-76ec-450a-8c49-d2c0396bc764"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.174593 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-kube-api-access-q2lkc" (OuterVolumeSpecName: "kube-api-access-q2lkc") pod "4f6d8ad7-76ec-450a-8c49-d2c0396bc764" (UID: "4f6d8ad7-76ec-450a-8c49-d2c0396bc764"). InnerVolumeSpecName "kube-api-access-q2lkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.194071 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f6d8ad7-76ec-450a-8c49-d2c0396bc764" (UID: "4f6d8ad7-76ec-450a-8c49-d2c0396bc764"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.204353 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-config-data" (OuterVolumeSpecName: "config-data") pod "4f6d8ad7-76ec-450a-8c49-d2c0396bc764" (UID: "4f6d8ad7-76ec-450a-8c49-d2c0396bc764"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.215591 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4f6d8ad7-76ec-450a-8c49-d2c0396bc764" (UID: "4f6d8ad7-76ec-450a-8c49-d2c0396bc764"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.264673 4867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.264738 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.264758 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.264775 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.264794 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2lkc\" (UniqueName: \"kubernetes.io/projected/4f6d8ad7-76ec-450a-8c49-d2c0396bc764-kube-api-access-q2lkc\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.330719 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.330764 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.365278 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.384731 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.443796 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:51 crc kubenswrapper[4867]: E0101 08:48:51.444336 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ce895d-6831-4af5-9e10-481ce05ec976" containerName="dnsmasq-dns" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.444359 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ce895d-6831-4af5-9e10-481ce05ec976" containerName="dnsmasq-dns" Jan 01 08:48:51 crc kubenswrapper[4867]: E0101 08:48:51.444379 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f6d8ad7-76ec-450a-8c49-d2c0396bc764" containerName="nova-metadata-metadata" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.444387 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6d8ad7-76ec-450a-8c49-d2c0396bc764" containerName="nova-metadata-metadata" Jan 01 08:48:51 crc kubenswrapper[4867]: E0101 08:48:51.444421 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ce895d-6831-4af5-9e10-481ce05ec976" containerName="init" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.444430 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ce895d-6831-4af5-9e10-481ce05ec976" containerName="init" Jan 01 08:48:51 crc kubenswrapper[4867]: E0101 08:48:51.444455 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f6d8ad7-76ec-450a-8c49-d2c0396bc764" containerName="nova-metadata-log" Jan 01 
08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.444463 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6d8ad7-76ec-450a-8c49-d2c0396bc764" containerName="nova-metadata-log" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.444663 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6d8ad7-76ec-450a-8c49-d2c0396bc764" containerName="nova-metadata-log" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.444687 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6d8ad7-76ec-450a-8c49-d2c0396bc764" containerName="nova-metadata-metadata" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.444715 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ce895d-6831-4af5-9e10-481ce05ec976" containerName="dnsmasq-dns" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.446877 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.449763 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.451583 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.470295 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.570247 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-config-data\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.570302 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-logs\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.570379 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.570415 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.570485 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rxsx\" (UniqueName: \"kubernetes.io/projected/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-kube-api-access-7rxsx\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.672172 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-config-data\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.672453 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-logs\") pod 
\"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.672515 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.672550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.672621 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rxsx\" (UniqueName: \"kubernetes.io/projected/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-kube-api-access-7rxsx\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.673275 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-logs\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.677408 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.678005 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.678541 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-config-data\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.696664 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rxsx\" (UniqueName: \"kubernetes.io/projected/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-kube-api-access-7rxsx\") pod \"nova-metadata-0\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " pod="openstack/nova-metadata-0" Jan 01 08:48:51 crc kubenswrapper[4867]: I0101 08:48:51.771129 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:48:52 crc kubenswrapper[4867]: I0101 08:48:52.047194 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8799ae41-c9cb-409a-ac59-3e6b59bb0198","Type":"ContainerStarted","Data":"f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd"} Jan 01 08:48:52 crc kubenswrapper[4867]: I0101 08:48:52.047317 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8c38e70a-7d9b-4601-a3e6-524ad937e365" containerName="nova-scheduler-scheduler" containerID="cri-o://2334025c66327f2a613400895c9d08424c69ef98ce42baaff54ab605006a369e" gracePeriod=30 Jan 01 08:48:52 crc kubenswrapper[4867]: I0101 08:48:52.048007 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 01 08:48:52 crc kubenswrapper[4867]: I0101 08:48:52.068273 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.068256082 podStartE2EDuration="2.068256082s" podCreationTimestamp="2026-01-01 08:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:52.063048097 +0000 UTC m=+1341.198316876" watchObservedRunningTime="2026-01-01 08:48:52.068256082 +0000 UTC m=+1341.203524851" Jan 01 08:48:52 crc kubenswrapper[4867]: I0101 08:48:52.238930 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:48:53 crc kubenswrapper[4867]: I0101 08:48:53.058462 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6957a8b-ec17-4cd0-8dab-5bb710fd0768","Type":"ContainerStarted","Data":"356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794"} Jan 01 08:48:53 crc kubenswrapper[4867]: I0101 08:48:53.059190 4867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6957a8b-ec17-4cd0-8dab-5bb710fd0768","Type":"ContainerStarted","Data":"9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59"} Jan 01 08:48:53 crc kubenswrapper[4867]: I0101 08:48:53.059251 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6957a8b-ec17-4cd0-8dab-5bb710fd0768","Type":"ContainerStarted","Data":"f1540f8c963fbb8fa383f1b97e139442889ad9a71a0a7c1633c40196f1c3ec94"} Jan 01 08:48:53 crc kubenswrapper[4867]: I0101 08:48:53.087596 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.087574224 podStartE2EDuration="2.087574224s" podCreationTimestamp="2026-01-01 08:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:53.081247327 +0000 UTC m=+1342.216516096" watchObservedRunningTime="2026-01-01 08:48:53.087574224 +0000 UTC m=+1342.222843013" Jan 01 08:48:53 crc kubenswrapper[4867]: I0101 08:48:53.142827 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f6d8ad7-76ec-450a-8c49-d2c0396bc764" path="/var/lib/kubelet/pods/4f6d8ad7-76ec-450a-8c49-d2c0396bc764/volumes" Jan 01 08:48:54 crc kubenswrapper[4867]: E0101 08:48:54.489798 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2334025c66327f2a613400895c9d08424c69ef98ce42baaff54ab605006a369e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 01 08:48:54 crc kubenswrapper[4867]: E0101 08:48:54.492499 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="2334025c66327f2a613400895c9d08424c69ef98ce42baaff54ab605006a369e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 01 08:48:54 crc kubenswrapper[4867]: E0101 08:48:54.494844 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2334025c66327f2a613400895c9d08424c69ef98ce42baaff54ab605006a369e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 01 08:48:54 crc kubenswrapper[4867]: E0101 08:48:54.494979 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8c38e70a-7d9b-4601-a3e6-524ad937e365" containerName="nova-scheduler-scheduler" Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.089518 4867 generic.go:334] "Generic (PLEG): container finished" podID="8c38e70a-7d9b-4601-a3e6-524ad937e365" containerID="2334025c66327f2a613400895c9d08424c69ef98ce42baaff54ab605006a369e" exitCode=0 Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.089617 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8c38e70a-7d9b-4601-a3e6-524ad937e365","Type":"ContainerDied","Data":"2334025c66327f2a613400895c9d08424c69ef98ce42baaff54ab605006a369e"} Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.418377 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.544965 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-config-data\") pod \"8c38e70a-7d9b-4601-a3e6-524ad937e365\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.545394 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-combined-ca-bundle\") pod \"8c38e70a-7d9b-4601-a3e6-524ad937e365\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.545560 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v54xg\" (UniqueName: \"kubernetes.io/projected/8c38e70a-7d9b-4601-a3e6-524ad937e365-kube-api-access-v54xg\") pod \"8c38e70a-7d9b-4601-a3e6-524ad937e365\" (UID: \"8c38e70a-7d9b-4601-a3e6-524ad937e365\") " Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.554098 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c38e70a-7d9b-4601-a3e6-524ad937e365-kube-api-access-v54xg" (OuterVolumeSpecName: "kube-api-access-v54xg") pod "8c38e70a-7d9b-4601-a3e6-524ad937e365" (UID: "8c38e70a-7d9b-4601-a3e6-524ad937e365"). InnerVolumeSpecName "kube-api-access-v54xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.587083 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-config-data" (OuterVolumeSpecName: "config-data") pod "8c38e70a-7d9b-4601-a3e6-524ad937e365" (UID: "8c38e70a-7d9b-4601-a3e6-524ad937e365"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.591319 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c38e70a-7d9b-4601-a3e6-524ad937e365" (UID: "8c38e70a-7d9b-4601-a3e6-524ad937e365"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.648394 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v54xg\" (UniqueName: \"kubernetes.io/projected/8c38e70a-7d9b-4601-a3e6-524ad937e365-kube-api-access-v54xg\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.648449 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:55 crc kubenswrapper[4867]: I0101 08:48:55.648464 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c38e70a-7d9b-4601-a3e6-524ad937e365-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.103681 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8c38e70a-7d9b-4601-a3e6-524ad937e365","Type":"ContainerDied","Data":"5bad67c9ec939fd2f87956c4e692955c04321d0320124ebe293a66c596a1d680"} Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.104062 4867 scope.go:117] "RemoveContainer" containerID="2334025c66327f2a613400895c9d08424c69ef98ce42baaff54ab605006a369e" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.104189 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.117729 4867 generic.go:334] "Generic (PLEG): container finished" podID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerID="c2cfbb056087e12b2a2885b24f64300c04f4b5224ad362d98504d48a15610138" exitCode=0 Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.117788 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45703f7c-e7c0-426b-9fb1-2f9db0295f23","Type":"ContainerDied","Data":"c2cfbb056087e12b2a2885b24f64300c04f4b5224ad362d98504d48a15610138"} Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.117820 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45703f7c-e7c0-426b-9fb1-2f9db0295f23","Type":"ContainerDied","Data":"60bd5b0cabe31cb4a0f42c2a6144ee80613d1e5a5d90c787e7ed7673f8faeb37"} Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.117834 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60bd5b0cabe31cb4a0f42c2a6144ee80613d1e5a5d90c787e7ed7673f8faeb37" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.188808 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.201740 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.208787 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.237792 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:48:56 crc kubenswrapper[4867]: E0101 08:48:56.238206 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerName="nova-api-api" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.238224 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerName="nova-api-api" Jan 01 08:48:56 crc kubenswrapper[4867]: E0101 08:48:56.238247 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerName="nova-api-log" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.238254 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerName="nova-api-log" Jan 01 08:48:56 crc kubenswrapper[4867]: E0101 08:48:56.238265 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c38e70a-7d9b-4601-a3e6-524ad937e365" containerName="nova-scheduler-scheduler" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.238270 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c38e70a-7d9b-4601-a3e6-524ad937e365" containerName="nova-scheduler-scheduler" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.238440 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerName="nova-api-api" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.238452 4867 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" containerName="nova-api-log" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.238462 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c38e70a-7d9b-4601-a3e6-524ad937e365" containerName="nova-scheduler-scheduler" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.239079 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.242341 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.246235 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.264990 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45703f7c-e7c0-426b-9fb1-2f9db0295f23-logs\") pod \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.265124 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwbzn\" (UniqueName: \"kubernetes.io/projected/45703f7c-e7c0-426b-9fb1-2f9db0295f23-kube-api-access-kwbzn\") pod \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.265234 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-config-data\") pod \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.265261 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-combined-ca-bundle\") pod \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\" (UID: \"45703f7c-e7c0-426b-9fb1-2f9db0295f23\") " Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.265691 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45703f7c-e7c0-426b-9fb1-2f9db0295f23-logs" (OuterVolumeSpecName: "logs") pod "45703f7c-e7c0-426b-9fb1-2f9db0295f23" (UID: "45703f7c-e7c0-426b-9fb1-2f9db0295f23"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.271013 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45703f7c-e7c0-426b-9fb1-2f9db0295f23-kube-api-access-kwbzn" (OuterVolumeSpecName: "kube-api-access-kwbzn") pod "45703f7c-e7c0-426b-9fb1-2f9db0295f23" (UID: "45703f7c-e7c0-426b-9fb1-2f9db0295f23"). InnerVolumeSpecName "kube-api-access-kwbzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.290766 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45703f7c-e7c0-426b-9fb1-2f9db0295f23" (UID: "45703f7c-e7c0-426b-9fb1-2f9db0295f23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.307130 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-config-data" (OuterVolumeSpecName: "config-data") pod "45703f7c-e7c0-426b-9fb1-2f9db0295f23" (UID: "45703f7c-e7c0-426b-9fb1-2f9db0295f23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.367561 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74kx\" (UniqueName: \"kubernetes.io/projected/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-kube-api-access-m74kx\") pod \"nova-scheduler-0\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.367637 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-config-data\") pod \"nova-scheduler-0\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.367711 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.367807 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwbzn\" (UniqueName: \"kubernetes.io/projected/45703f7c-e7c0-426b-9fb1-2f9db0295f23-kube-api-access-kwbzn\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.367822 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.367833 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45703f7c-e7c0-426b-9fb1-2f9db0295f23-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.367844 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45703f7c-e7c0-426b-9fb1-2f9db0295f23-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.470421 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m74kx\" (UniqueName: \"kubernetes.io/projected/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-kube-api-access-m74kx\") pod \"nova-scheduler-0\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.470791 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-config-data\") pod \"nova-scheduler-0\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.470924 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.476284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-config-data\") pod \"nova-scheduler-0\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.476700 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.491527 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m74kx\" (UniqueName: \"kubernetes.io/projected/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-kube-api-access-m74kx\") pod \"nova-scheduler-0\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.564997 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.771631 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 01 08:48:56 crc kubenswrapper[4867]: I0101 08:48:56.773040 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.135240 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: W0101 08:48:57.150548 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b9eff86_c80d_4eb0_8a44_1e9c6511c90d.slice/crio-d8f4660105e8f8775e189d85f67c1273646ebfb074d208d234896b0c50e9cf88 WatchSource:0}: Error finding container d8f4660105e8f8775e189d85f67c1273646ebfb074d208d234896b0c50e9cf88: Status 404 returned error can't find the container with id d8f4660105e8f8775e189d85f67c1273646ebfb074d208d234896b0c50e9cf88 Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.153551 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c38e70a-7d9b-4601-a3e6-524ad937e365" path="/var/lib/kubelet/pods/8c38e70a-7d9b-4601-a3e6-524ad937e365/volumes" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.161134 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.329386 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.337958 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.351753 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.359623 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.364147 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.372479 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.493574 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9jrc\" (UniqueName: \"kubernetes.io/projected/5e39c244-f85a-4705-8796-128001d4cde3-kube-api-access-x9jrc\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.493696 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e39c244-f85a-4705-8796-128001d4cde3-logs\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.493745 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.493957 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-config-data\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.595737 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/5e39c244-f85a-4705-8796-128001d4cde3-logs\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.596047 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.596901 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-config-data\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.596470 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e39c244-f85a-4705-8796-128001d4cde3-logs\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.597105 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9jrc\" (UniqueName: \"kubernetes.io/projected/5e39c244-f85a-4705-8796-128001d4cde3-kube-api-access-x9jrc\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.600183 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-config-data\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.600400 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.626537 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9jrc\" (UniqueName: \"kubernetes.io/projected/5e39c244-f85a-4705-8796-128001d4cde3-kube-api-access-x9jrc\") pod \"nova-api-0\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " pod="openstack/nova-api-0" Jan 01 08:48:57 crc kubenswrapper[4867]: I0101 08:48:57.677970 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:48:58 crc kubenswrapper[4867]: I0101 08:48:58.144555 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d","Type":"ContainerStarted","Data":"9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982"} Jan 01 08:48:58 crc kubenswrapper[4867]: I0101 08:48:58.145047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d","Type":"ContainerStarted","Data":"d8f4660105e8f8775e189d85f67c1273646ebfb074d208d234896b0c50e9cf88"} Jan 01 08:48:58 crc kubenswrapper[4867]: I0101 08:48:58.167862 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.167839054 podStartE2EDuration="2.167839054s" podCreationTimestamp="2026-01-01 08:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:58.161086955 +0000 UTC m=+1347.296355764" watchObservedRunningTime="2026-01-01 08:48:58.167839054 +0000 UTC m=+1347.303107863" Jan 01 08:48:58 crc 
kubenswrapper[4867]: W0101 08:48:58.209467 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e39c244_f85a_4705_8796_128001d4cde3.slice/crio-8d0163237ccb1997aaf97aebe269d1e5d35bb7cc964b152f7d266f9612a54cec WatchSource:0}: Error finding container 8d0163237ccb1997aaf97aebe269d1e5d35bb7cc964b152f7d266f9612a54cec: Status 404 returned error can't find the container with id 8d0163237ccb1997aaf97aebe269d1e5d35bb7cc964b152f7d266f9612a54cec Jan 01 08:48:58 crc kubenswrapper[4867]: I0101 08:48:58.210444 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:48:59 crc kubenswrapper[4867]: I0101 08:48:59.145629 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45703f7c-e7c0-426b-9fb1-2f9db0295f23" path="/var/lib/kubelet/pods/45703f7c-e7c0-426b-9fb1-2f9db0295f23/volumes" Jan 01 08:48:59 crc kubenswrapper[4867]: I0101 08:48:59.158771 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e39c244-f85a-4705-8796-128001d4cde3","Type":"ContainerStarted","Data":"18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c"} Jan 01 08:48:59 crc kubenswrapper[4867]: I0101 08:48:59.158864 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e39c244-f85a-4705-8796-128001d4cde3","Type":"ContainerStarted","Data":"f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56"} Jan 01 08:48:59 crc kubenswrapper[4867]: I0101 08:48:59.158914 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e39c244-f85a-4705-8796-128001d4cde3","Type":"ContainerStarted","Data":"8d0163237ccb1997aaf97aebe269d1e5d35bb7cc964b152f7d266f9612a54cec"} Jan 01 08:48:59 crc kubenswrapper[4867]: I0101 08:48:59.189052 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=2.189025618 podStartE2EDuration="2.189025618s" podCreationTimestamp="2026-01-01 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:48:59.178449702 +0000 UTC m=+1348.313718491" watchObservedRunningTime="2026-01-01 08:48:59.189025618 +0000 UTC m=+1348.324294387" Jan 01 08:49:00 crc kubenswrapper[4867]: I0101 08:49:00.115879 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 01 08:49:00 crc kubenswrapper[4867]: I0101 08:49:00.424801 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 01 08:49:01 crc kubenswrapper[4867]: I0101 08:49:01.566149 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 01 08:49:01 crc kubenswrapper[4867]: I0101 08:49:01.772242 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 01 08:49:01 crc kubenswrapper[4867]: I0101 08:49:01.772303 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 01 08:49:02 crc kubenswrapper[4867]: I0101 08:49:02.786068 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 01 08:49:02 crc kubenswrapper[4867]: I0101 08:49:02.786096 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 01 
08:49:04 crc kubenswrapper[4867]: I0101 08:49:04.787698 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:49:04 crc kubenswrapper[4867]: I0101 08:49:04.789182 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ac91c4fe-e982-4a27-b80f-e7d0d7659cc6" containerName="kube-state-metrics" containerID="cri-o://9caed6a6e124cb26a7f6e990787e129c9b7a7e22f995f9b92bc3c80929ed9d80" gracePeriod=30 Jan 01 08:49:05 crc kubenswrapper[4867]: I0101 08:49:05.233268 4867 generic.go:334] "Generic (PLEG): container finished" podID="ac91c4fe-e982-4a27-b80f-e7d0d7659cc6" containerID="9caed6a6e124cb26a7f6e990787e129c9b7a7e22f995f9b92bc3c80929ed9d80" exitCode=2 Jan 01 08:49:05 crc kubenswrapper[4867]: I0101 08:49:05.233323 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ac91c4fe-e982-4a27-b80f-e7d0d7659cc6","Type":"ContainerDied","Data":"9caed6a6e124cb26a7f6e990787e129c9b7a7e22f995f9b92bc3c80929ed9d80"} Jan 01 08:49:05 crc kubenswrapper[4867]: I0101 08:49:05.786020 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 01 08:49:05 crc kubenswrapper[4867]: I0101 08:49:05.856215 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmdnl\" (UniqueName: \"kubernetes.io/projected/ac91c4fe-e982-4a27-b80f-e7d0d7659cc6-kube-api-access-fmdnl\") pod \"ac91c4fe-e982-4a27-b80f-e7d0d7659cc6\" (UID: \"ac91c4fe-e982-4a27-b80f-e7d0d7659cc6\") " Jan 01 08:49:05 crc kubenswrapper[4867]: I0101 08:49:05.863336 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac91c4fe-e982-4a27-b80f-e7d0d7659cc6-kube-api-access-fmdnl" (OuterVolumeSpecName: "kube-api-access-fmdnl") pod "ac91c4fe-e982-4a27-b80f-e7d0d7659cc6" (UID: "ac91c4fe-e982-4a27-b80f-e7d0d7659cc6"). 
InnerVolumeSpecName "kube-api-access-fmdnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:49:05 crc kubenswrapper[4867]: I0101 08:49:05.958340 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmdnl\" (UniqueName: \"kubernetes.io/projected/ac91c4fe-e982-4a27-b80f-e7d0d7659cc6-kube-api-access-fmdnl\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.246014 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ac91c4fe-e982-4a27-b80f-e7d0d7659cc6","Type":"ContainerDied","Data":"0148a5641a066bae6765eee8c241f1ed0108d51f663b68195353d1ed0980e9a0"} Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.246086 4867 scope.go:117] "RemoveContainer" containerID="9caed6a6e124cb26a7f6e990787e129c9b7a7e22f995f9b92bc3c80929ed9d80" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.246143 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.288060 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.302155 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.312949 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:49:06 crc kubenswrapper[4867]: E0101 08:49:06.313459 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac91c4fe-e982-4a27-b80f-e7d0d7659cc6" containerName="kube-state-metrics" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.313478 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac91c4fe-e982-4a27-b80f-e7d0d7659cc6" containerName="kube-state-metrics" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.313744 4867 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ac91c4fe-e982-4a27-b80f-e7d0d7659cc6" containerName="kube-state-metrics" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.314520 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.316635 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.317399 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.334153 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.470048 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.470201 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.470268 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lshtc\" (UniqueName: \"kubernetes.io/projected/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-api-access-lshtc\") pod \"kube-state-metrics-0\" (UID: 
\"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.470318 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.566225 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.571915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lshtc\" (UniqueName: \"kubernetes.io/projected/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-api-access-lshtc\") pod \"kube-state-metrics-0\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.571979 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.572032 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.572115 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.577379 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.583393 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.590225 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.593598 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lshtc\" (UniqueName: \"kubernetes.io/projected/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-api-access-lshtc\") pod \"kube-state-metrics-0\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.618167 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.642251 4867 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.718345 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.718670 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="ceilometer-central-agent" containerID="cri-o://978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217" gracePeriod=30 Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.718837 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="proxy-httpd" containerID="cri-o://740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81" gracePeriod=30 Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.718936 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="sg-core" containerID="cri-o://50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33" gracePeriod=30 Jan 01 08:49:06 crc kubenswrapper[4867]: I0101 08:49:06.719006 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="ceilometer-notification-agent" containerID="cri-o://52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601" gracePeriod=30 Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.142193 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac91c4fe-e982-4a27-b80f-e7d0d7659cc6" path="/var/lib/kubelet/pods/ac91c4fe-e982-4a27-b80f-e7d0d7659cc6/volumes" Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.151784 4867 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.255008 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"15b1cd3d-248e-4861-a69a-4c8d284babb3","Type":"ContainerStarted","Data":"5b2c2937f1d076d55f8da93966befa0dda487e66069dd4617660f8aadafad024"} Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.257685 4867 generic.go:334] "Generic (PLEG): container finished" podID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerID="740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81" exitCode=0 Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.257715 4867 generic.go:334] "Generic (PLEG): container finished" podID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerID="50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33" exitCode=2 Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.257726 4867 generic.go:334] "Generic (PLEG): container finished" podID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerID="978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217" exitCode=0 Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.257786 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03a3b9bd-55d1-469d-a4b9-4db9992bbf56","Type":"ContainerDied","Data":"740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81"} Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.257841 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03a3b9bd-55d1-469d-a4b9-4db9992bbf56","Type":"ContainerDied","Data":"50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33"} Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.257860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"03a3b9bd-55d1-469d-a4b9-4db9992bbf56","Type":"ContainerDied","Data":"978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217"} Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.285975 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.679040 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 01 08:49:07 crc kubenswrapper[4867]: I0101 08:49:07.679410 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 01 08:49:08 crc kubenswrapper[4867]: I0101 08:49:08.269738 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"15b1cd3d-248e-4861-a69a-4c8d284babb3","Type":"ContainerStarted","Data":"241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790"} Jan 01 08:49:08 crc kubenswrapper[4867]: I0101 08:49:08.301609 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.872907458 podStartE2EDuration="2.301583294s" podCreationTimestamp="2026-01-01 08:49:06 +0000 UTC" firstStartedPulling="2026-01-01 08:49:07.157234289 +0000 UTC m=+1356.292503058" lastFinishedPulling="2026-01-01 08:49:07.585910115 +0000 UTC m=+1356.721178894" observedRunningTime="2026-01-01 08:49:08.293633421 +0000 UTC m=+1357.428902210" watchObservedRunningTime="2026-01-01 08:49:08.301583294 +0000 UTC m=+1357.436852063" Jan 01 08:49:08 crc kubenswrapper[4867]: I0101 08:49:08.765470 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5e39c244-f85a-4705-8796-128001d4cde3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 01 08:49:08 crc kubenswrapper[4867]: I0101 08:49:08.765556 
4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5e39c244-f85a-4705-8796-128001d4cde3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.013307 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.126486 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-combined-ca-bundle\") pod \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.126582 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-run-httpd\") pod \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.126654 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-config-data\") pod \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.126777 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-log-httpd\") pod \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.126822 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-f4jxg\" (UniqueName: \"kubernetes.io/projected/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-kube-api-access-f4jxg\") pod \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.126858 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-scripts\") pod \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.126963 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-sg-core-conf-yaml\") pod \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\" (UID: \"03a3b9bd-55d1-469d-a4b9-4db9992bbf56\") " Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.135344 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "03a3b9bd-55d1-469d-a4b9-4db9992bbf56" (UID: "03a3b9bd-55d1-469d-a4b9-4db9992bbf56"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.137824 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "03a3b9bd-55d1-469d-a4b9-4db9992bbf56" (UID: "03a3b9bd-55d1-469d-a4b9-4db9992bbf56"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.148141 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-scripts" (OuterVolumeSpecName: "scripts") pod "03a3b9bd-55d1-469d-a4b9-4db9992bbf56" (UID: "03a3b9bd-55d1-469d-a4b9-4db9992bbf56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.154706 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-kube-api-access-f4jxg" (OuterVolumeSpecName: "kube-api-access-f4jxg") pod "03a3b9bd-55d1-469d-a4b9-4db9992bbf56" (UID: "03a3b9bd-55d1-469d-a4b9-4db9992bbf56"). InnerVolumeSpecName "kube-api-access-f4jxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.169138 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "03a3b9bd-55d1-469d-a4b9-4db9992bbf56" (UID: "03a3b9bd-55d1-469d-a4b9-4db9992bbf56"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.235045 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03a3b9bd-55d1-469d-a4b9-4db9992bbf56" (UID: "03a3b9bd-55d1-469d-a4b9-4db9992bbf56"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.235315 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.235401 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.235478 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.235617 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4jxg\" (UniqueName: \"kubernetes.io/projected/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-kube-api-access-f4jxg\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.235695 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.235798 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.255257 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-config-data" (OuterVolumeSpecName: "config-data") pod "03a3b9bd-55d1-469d-a4b9-4db9992bbf56" (UID: "03a3b9bd-55d1-469d-a4b9-4db9992bbf56"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.290707 4867 generic.go:334] "Generic (PLEG): container finished" podID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerID="52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601" exitCode=0 Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.290758 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.290770 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03a3b9bd-55d1-469d-a4b9-4db9992bbf56","Type":"ContainerDied","Data":"52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601"} Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.291796 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03a3b9bd-55d1-469d-a4b9-4db9992bbf56","Type":"ContainerDied","Data":"151bb62db5f3ef7d4932771898c41abb4725fcc2ae3643ebdf430b7efdda85db"} Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.291834 4867 scope.go:117] "RemoveContainer" containerID="740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.292146 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.318667 4867 scope.go:117] "RemoveContainer" containerID="50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.337001 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.338472 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/03a3b9bd-55d1-469d-a4b9-4db9992bbf56-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.345119 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.351100 4867 scope.go:117] "RemoveContainer" containerID="52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.369416 4867 scope.go:117] "RemoveContainer" containerID="978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.370083 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:09 crc kubenswrapper[4867]: E0101 08:49:09.370467 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="ceilometer-notification-agent" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.370484 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="ceilometer-notification-agent" Jan 01 08:49:09 crc kubenswrapper[4867]: E0101 08:49:09.370526 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="proxy-httpd" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.370535 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="proxy-httpd" Jan 01 08:49:09 crc kubenswrapper[4867]: E0101 08:49:09.370550 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="ceilometer-central-agent" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.370558 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="ceilometer-central-agent" Jan 01 
08:49:09 crc kubenswrapper[4867]: E0101 08:49:09.370582 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="sg-core" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.370590 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="sg-core" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.370778 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="ceilometer-notification-agent" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.370794 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="sg-core" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.370814 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="ceilometer-central-agent" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.370823 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" containerName="proxy-httpd" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.372344 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.375599 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.375731 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.375848 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.381254 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.405768 4867 scope.go:117] "RemoveContainer" containerID="740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81" Jan 01 08:49:09 crc kubenswrapper[4867]: E0101 08:49:09.406509 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81\": container with ID starting with 740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81 not found: ID does not exist" containerID="740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.406552 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81"} err="failed to get container status \"740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81\": rpc error: code = NotFound desc = could not find container \"740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81\": container with ID starting with 740ae53dfca1fdd4b49fbcfc52d0dc2f6cf189a2a4a0c665564dff50f369cb81 not found: ID does not exist" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 
08:49:09.406587 4867 scope.go:117] "RemoveContainer" containerID="50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33" Jan 01 08:49:09 crc kubenswrapper[4867]: E0101 08:49:09.407106 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33\": container with ID starting with 50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33 not found: ID does not exist" containerID="50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.407150 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33"} err="failed to get container status \"50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33\": rpc error: code = NotFound desc = could not find container \"50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33\": container with ID starting with 50618d689c3b7e41657dd6efc02228bf93f504ca050c1afe3dbad4fd3151fc33 not found: ID does not exist" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.407178 4867 scope.go:117] "RemoveContainer" containerID="52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601" Jan 01 08:49:09 crc kubenswrapper[4867]: E0101 08:49:09.407479 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601\": container with ID starting with 52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601 not found: ID does not exist" containerID="52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.407510 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601"} err="failed to get container status \"52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601\": rpc error: code = NotFound desc = could not find container \"52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601\": container with ID starting with 52a3473b0fb6b9986b54fdfe19af6b5074a295f8c1f29dd811efcdd912472601 not found: ID does not exist" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.407535 4867 scope.go:117] "RemoveContainer" containerID="978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217" Jan 01 08:49:09 crc kubenswrapper[4867]: E0101 08:49:09.407990 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217\": container with ID starting with 978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217 not found: ID does not exist" containerID="978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.408013 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217"} err="failed to get container status \"978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217\": rpc error: code = NotFound desc = could not find container \"978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217\": container with ID starting with 978e912450fcc6bc39516afc738b91cba25cf146101d3265e9a3d6435ed0c217 not found: ID does not exist" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.439942 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-log-httpd\") pod \"ceilometer-0\" (UID: 
\"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.440013 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.440123 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxkn\" (UniqueName: \"kubernetes.io/projected/9cd9e18b-6370-426d-8abe-52f0d39f7f79-kube-api-access-wjxkn\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.440238 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.440321 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-config-data\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.440390 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc 
kubenswrapper[4867]: I0101 08:49:09.440544 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-scripts\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.440629 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-run-httpd\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.543143 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.543275 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-config-data\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.543345 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.543457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-scripts\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.543530 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-run-httpd\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.543595 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-log-httpd\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.543643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.543691 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjxkn\" (UniqueName: \"kubernetes.io/projected/9cd9e18b-6370-426d-8abe-52f0d39f7f79-kube-api-access-wjxkn\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.544464 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-log-httpd\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 
08:49:09.544564 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-run-httpd\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.548752 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-scripts\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.548756 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.549687 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-config-data\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.550931 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.553575 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " 
pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.559801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjxkn\" (UniqueName: \"kubernetes.io/projected/9cd9e18b-6370-426d-8abe-52f0d39f7f79-kube-api-access-wjxkn\") pod \"ceilometer-0\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " pod="openstack/ceilometer-0" Jan 01 08:49:09 crc kubenswrapper[4867]: I0101 08:49:09.698687 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:10 crc kubenswrapper[4867]: I0101 08:49:10.146965 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:10 crc kubenswrapper[4867]: W0101 08:49:10.150047 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd9e18b_6370_426d_8abe_52f0d39f7f79.slice/crio-06a4bafcc683c55bf1ba340d00f35b7e95f6dd91f9792e9c38a8a3a3d4520c5b WatchSource:0}: Error finding container 06a4bafcc683c55bf1ba340d00f35b7e95f6dd91f9792e9c38a8a3a3d4520c5b: Status 404 returned error can't find the container with id 06a4bafcc683c55bf1ba340d00f35b7e95f6dd91f9792e9c38a8a3a3d4520c5b Jan 01 08:49:10 crc kubenswrapper[4867]: I0101 08:49:10.300240 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cd9e18b-6370-426d-8abe-52f0d39f7f79","Type":"ContainerStarted","Data":"06a4bafcc683c55bf1ba340d00f35b7e95f6dd91f9792e9c38a8a3a3d4520c5b"} Jan 01 08:49:11 crc kubenswrapper[4867]: I0101 08:49:11.164178 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a3b9bd-55d1-469d-a4b9-4db9992bbf56" path="/var/lib/kubelet/pods/03a3b9bd-55d1-469d-a4b9-4db9992bbf56/volumes" Jan 01 08:49:11 crc kubenswrapper[4867]: I0101 08:49:11.317295 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9cd9e18b-6370-426d-8abe-52f0d39f7f79","Type":"ContainerStarted","Data":"90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d"} Jan 01 08:49:11 crc kubenswrapper[4867]: I0101 08:49:11.777337 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 01 08:49:11 crc kubenswrapper[4867]: I0101 08:49:11.784361 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 01 08:49:11 crc kubenswrapper[4867]: I0101 08:49:11.805346 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 01 08:49:12 crc kubenswrapper[4867]: I0101 08:49:12.338396 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cd9e18b-6370-426d-8abe-52f0d39f7f79","Type":"ContainerStarted","Data":"23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339"} Jan 01 08:49:12 crc kubenswrapper[4867]: I0101 08:49:12.350085 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 01 08:49:13 crc kubenswrapper[4867]: I0101 08:49:13.350605 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cd9e18b-6370-426d-8abe-52f0d39f7f79","Type":"ContainerStarted","Data":"b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9"} Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.375376 4867 generic.go:334] "Generic (PLEG): container finished" podID="603e44e2-6aba-45b8-a80a-96e4420dbcfc" containerID="f2735ab20c4cfd9df9b85bab04e44b30b480005938646b7829854374acce9273" exitCode=137 Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.375911 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"603e44e2-6aba-45b8-a80a-96e4420dbcfc","Type":"ContainerDied","Data":"f2735ab20c4cfd9df9b85bab04e44b30b480005938646b7829854374acce9273"} Jan 01 
08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.376959 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"603e44e2-6aba-45b8-a80a-96e4420dbcfc","Type":"ContainerDied","Data":"135b9bf948682dde6ad1a19eeb3a691029185e8c8c7b87d6bbd18506800b07c7"} Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.377066 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="135b9bf948682dde6ad1a19eeb3a691029185e8c8c7b87d6bbd18506800b07c7" Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.382503 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.383978 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cd9e18b-6370-426d-8abe-52f0d39f7f79","Type":"ContainerStarted","Data":"ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447"} Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.384229 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.427125 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.633617482 podStartE2EDuration="5.427108884s" podCreationTimestamp="2026-01-01 08:49:09 +0000 UTC" firstStartedPulling="2026-01-01 08:49:10.152096065 +0000 UTC m=+1359.287364824" lastFinishedPulling="2026-01-01 08:49:13.945587457 +0000 UTC m=+1363.080856226" observedRunningTime="2026-01-01 08:49:14.418977458 +0000 UTC m=+1363.554246227" watchObservedRunningTime="2026-01-01 08:49:14.427108884 +0000 UTC m=+1363.562377653" Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.445831 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-combined-ca-bundle\") pod \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\" (UID: \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.445938 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhhm6\" (UniqueName: \"kubernetes.io/projected/603e44e2-6aba-45b8-a80a-96e4420dbcfc-kube-api-access-lhhm6\") pod \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\" (UID: \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.446040 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-config-data\") pod \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\" (UID: \"603e44e2-6aba-45b8-a80a-96e4420dbcfc\") " Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.452036 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603e44e2-6aba-45b8-a80a-96e4420dbcfc-kube-api-access-lhhm6" (OuterVolumeSpecName: "kube-api-access-lhhm6") pod "603e44e2-6aba-45b8-a80a-96e4420dbcfc" (UID: "603e44e2-6aba-45b8-a80a-96e4420dbcfc"). InnerVolumeSpecName "kube-api-access-lhhm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.477075 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-config-data" (OuterVolumeSpecName: "config-data") pod "603e44e2-6aba-45b8-a80a-96e4420dbcfc" (UID: "603e44e2-6aba-45b8-a80a-96e4420dbcfc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.479449 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "603e44e2-6aba-45b8-a80a-96e4420dbcfc" (UID: "603e44e2-6aba-45b8-a80a-96e4420dbcfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.548378 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.548410 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/603e44e2-6aba-45b8-a80a-96e4420dbcfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:14 crc kubenswrapper[4867]: I0101 08:49:14.548424 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhhm6\" (UniqueName: \"kubernetes.io/projected/603e44e2-6aba-45b8-a80a-96e4420dbcfc-kube-api-access-lhhm6\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.393251 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.428544 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.441944 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.454608 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:49:15 crc kubenswrapper[4867]: E0101 08:49:15.455094 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603e44e2-6aba-45b8-a80a-96e4420dbcfc" containerName="nova-cell1-novncproxy-novncproxy" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.455126 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="603e44e2-6aba-45b8-a80a-96e4420dbcfc" containerName="nova-cell1-novncproxy-novncproxy" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.455382 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="603e44e2-6aba-45b8-a80a-96e4420dbcfc" containerName="nova-cell1-novncproxy-novncproxy" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.456178 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.462689 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.462850 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.463100 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.479229 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.565557 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.565606 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.565633 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc 
kubenswrapper[4867]: I0101 08:49:15.565800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kcj4\" (UniqueName: \"kubernetes.io/projected/9c8a7ced-4990-4ea2-baff-8d3adf064a56-kube-api-access-9kcj4\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.565856 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.666875 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.667846 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.667971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.668054 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.668264 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kcj4\" (UniqueName: \"kubernetes.io/projected/9c8a7ced-4990-4ea2-baff-8d3adf064a56-kube-api-access-9kcj4\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.672580 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.672936 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.673096 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.682325 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.695313 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kcj4\" (UniqueName: \"kubernetes.io/projected/9c8a7ced-4990-4ea2-baff-8d3adf064a56-kube-api-access-9kcj4\") pod \"nova-cell1-novncproxy-0\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:15 crc kubenswrapper[4867]: I0101 08:49:15.783964 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:16 crc kubenswrapper[4867]: W0101 08:49:16.254078 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c8a7ced_4990_4ea2_baff_8d3adf064a56.slice/crio-9bdbc5e52b0077ef250b50e9ebc4444a080a8eecfae92c07bd1b17a4120968c4 WatchSource:0}: Error finding container 9bdbc5e52b0077ef250b50e9ebc4444a080a8eecfae92c07bd1b17a4120968c4: Status 404 returned error can't find the container with id 9bdbc5e52b0077ef250b50e9ebc4444a080a8eecfae92c07bd1b17a4120968c4 Jan 01 08:49:16 crc kubenswrapper[4867]: I0101 08:49:16.258681 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:49:16 crc kubenswrapper[4867]: I0101 08:49:16.403150 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9c8a7ced-4990-4ea2-baff-8d3adf064a56","Type":"ContainerStarted","Data":"9bdbc5e52b0077ef250b50e9ebc4444a080a8eecfae92c07bd1b17a4120968c4"} Jan 01 08:49:16 crc kubenswrapper[4867]: I0101 08:49:16.652828 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 01 
08:49:17 crc kubenswrapper[4867]: I0101 08:49:17.143247 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603e44e2-6aba-45b8-a80a-96e4420dbcfc" path="/var/lib/kubelet/pods/603e44e2-6aba-45b8-a80a-96e4420dbcfc/volumes" Jan 01 08:49:17 crc kubenswrapper[4867]: I0101 08:49:17.413820 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9c8a7ced-4990-4ea2-baff-8d3adf064a56","Type":"ContainerStarted","Data":"c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd"} Jan 01 08:49:17 crc kubenswrapper[4867]: I0101 08:49:17.447404 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.447378261 podStartE2EDuration="2.447378261s" podCreationTimestamp="2026-01-01 08:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:49:17.447010511 +0000 UTC m=+1366.582279280" watchObservedRunningTime="2026-01-01 08:49:17.447378261 +0000 UTC m=+1366.582647030" Jan 01 08:49:17 crc kubenswrapper[4867]: I0101 08:49:17.693072 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 01 08:49:17 crc kubenswrapper[4867]: I0101 08:49:17.693453 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 01 08:49:17 crc kubenswrapper[4867]: I0101 08:49:17.695464 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 01 08:49:17 crc kubenswrapper[4867]: I0101 08:49:17.699403 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.422339 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.427459 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.604055 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbdfbb78f-5g78q"] Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.605432 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.638832 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-config\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.638928 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mztd6\" (UniqueName: \"kubernetes.io/projected/d579322c-12b7-488b-8220-31ef35016c68-kube-api-access-mztd6\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.639002 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-nb\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.639052 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-svc\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: 
\"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.639105 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-sb\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.639201 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-swift-storage-0\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.659012 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbdfbb78f-5g78q"] Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.740618 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-swift-storage-0\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.740670 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-config\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.740708 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mztd6\" (UniqueName: \"kubernetes.io/projected/d579322c-12b7-488b-8220-31ef35016c68-kube-api-access-mztd6\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.740735 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-nb\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.740779 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-svc\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.740810 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-sb\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.741732 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-sb\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.742026 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-swift-storage-0\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.742348 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-nb\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.742638 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-config\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.742926 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-svc\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.787327 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mztd6\" (UniqueName: \"kubernetes.io/projected/d579322c-12b7-488b-8220-31ef35016c68-kube-api-access-mztd6\") pod \"dnsmasq-dns-fbdfbb78f-5g78q\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:18 crc kubenswrapper[4867]: I0101 08:49:18.936173 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:19 crc kubenswrapper[4867]: I0101 08:49:19.438982 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbdfbb78f-5g78q"] Jan 01 08:49:20 crc kubenswrapper[4867]: I0101 08:49:20.455898 4867 generic.go:334] "Generic (PLEG): container finished" podID="d579322c-12b7-488b-8220-31ef35016c68" containerID="37310f221b90260f704cc9774670b03490fde04e0d9fb5eac5f18180fd865c4a" exitCode=0 Jan 01 08:49:20 crc kubenswrapper[4867]: I0101 08:49:20.456147 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" event={"ID":"d579322c-12b7-488b-8220-31ef35016c68","Type":"ContainerDied","Data":"37310f221b90260f704cc9774670b03490fde04e0d9fb5eac5f18180fd865c4a"} Jan 01 08:49:20 crc kubenswrapper[4867]: I0101 08:49:20.456332 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" event={"ID":"d579322c-12b7-488b-8220-31ef35016c68","Type":"ContainerStarted","Data":"bb6bf94584f988aec98816b2c74dc43a9b9138402681d5cf729debf913d051a3"} Jan 01 08:49:20 crc kubenswrapper[4867]: I0101 08:49:20.739610 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:20 crc kubenswrapper[4867]: I0101 08:49:20.740339 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="ceilometer-central-agent" containerID="cri-o://90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d" gracePeriod=30 Jan 01 08:49:20 crc kubenswrapper[4867]: I0101 08:49:20.740398 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="proxy-httpd" containerID="cri-o://ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447" gracePeriod=30 Jan 01 08:49:20 crc 
kubenswrapper[4867]: I0101 08:49:20.740396 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="sg-core" containerID="cri-o://b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9" gracePeriod=30 Jan 01 08:49:20 crc kubenswrapper[4867]: I0101 08:49:20.740476 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="ceilometer-notification-agent" containerID="cri-o://23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339" gracePeriod=30 Jan 01 08:49:20 crc kubenswrapper[4867]: I0101 08:49:20.784575 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.010132 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.331462 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.331806 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.331856 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 
08:49:21.332637 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fae12ab6ce4b32e7095b166bc2001d0435bf314dafdb60059b95e31213f00b52"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.332699 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://fae12ab6ce4b32e7095b166bc2001d0435bf314dafdb60059b95e31213f00b52" gracePeriod=600 Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.488397 4867 generic.go:334] "Generic (PLEG): container finished" podID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerID="ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447" exitCode=0 Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.488435 4867 generic.go:334] "Generic (PLEG): container finished" podID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerID="b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9" exitCode=2 Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.488447 4867 generic.go:334] "Generic (PLEG): container finished" podID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerID="90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d" exitCode=0 Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.488491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cd9e18b-6370-426d-8abe-52f0d39f7f79","Type":"ContainerDied","Data":"ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447"} Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.488550 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9cd9e18b-6370-426d-8abe-52f0d39f7f79","Type":"ContainerDied","Data":"b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9"} Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.488565 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cd9e18b-6370-426d-8abe-52f0d39f7f79","Type":"ContainerDied","Data":"90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d"} Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.512455 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" event={"ID":"d579322c-12b7-488b-8220-31ef35016c68","Type":"ContainerStarted","Data":"9aaec2bdb437295ba5550a821ae8d1f9f3e0bbb9ba3c726894b4022fa400f982"} Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.512565 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5e39c244-f85a-4705-8796-128001d4cde3" containerName="nova-api-log" containerID="cri-o://f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56" gracePeriod=30 Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.512653 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5e39c244-f85a-4705-8796-128001d4cde3" containerName="nova-api-api" containerID="cri-o://18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c" gracePeriod=30 Jan 01 08:49:21 crc kubenswrapper[4867]: I0101 08:49:21.551900 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" podStartSLOduration=3.551871487 podStartE2EDuration="3.551871487s" podCreationTimestamp="2026-01-01 08:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:49:21.546842967 +0000 UTC m=+1370.682111746" watchObservedRunningTime="2026-01-01 08:49:21.551871487 +0000 UTC 
m=+1370.687140246" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.092009 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.115155 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-ceilometer-tls-certs\") pod \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.115458 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjxkn\" (UniqueName: \"kubernetes.io/projected/9cd9e18b-6370-426d-8abe-52f0d39f7f79-kube-api-access-wjxkn\") pod \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.115498 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-sg-core-conf-yaml\") pod \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.115564 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-log-httpd\") pod \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.115607 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-scripts\") pod \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " Jan 01 08:49:22 crc 
kubenswrapper[4867]: I0101 08:49:22.115685 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-combined-ca-bundle\") pod \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.115747 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-run-httpd\") pod \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.115801 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-config-data\") pod \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\" (UID: \"9cd9e18b-6370-426d-8abe-52f0d39f7f79\") " Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.122310 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9cd9e18b-6370-426d-8abe-52f0d39f7f79" (UID: "9cd9e18b-6370-426d-8abe-52f0d39f7f79"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.122597 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9cd9e18b-6370-426d-8abe-52f0d39f7f79" (UID: "9cd9e18b-6370-426d-8abe-52f0d39f7f79"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.123686 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.123715 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cd9e18b-6370-426d-8abe-52f0d39f7f79-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.132561 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-scripts" (OuterVolumeSpecName: "scripts") pod "9cd9e18b-6370-426d-8abe-52f0d39f7f79" (UID: "9cd9e18b-6370-426d-8abe-52f0d39f7f79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.143195 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd9e18b-6370-426d-8abe-52f0d39f7f79-kube-api-access-wjxkn" (OuterVolumeSpecName: "kube-api-access-wjxkn") pod "9cd9e18b-6370-426d-8abe-52f0d39f7f79" (UID: "9cd9e18b-6370-426d-8abe-52f0d39f7f79"). InnerVolumeSpecName "kube-api-access-wjxkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.194553 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9cd9e18b-6370-426d-8abe-52f0d39f7f79" (UID: "9cd9e18b-6370-426d-8abe-52f0d39f7f79"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.211424 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9cd9e18b-6370-426d-8abe-52f0d39f7f79" (UID: "9cd9e18b-6370-426d-8abe-52f0d39f7f79"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.227659 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.227689 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.227698 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.227707 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjxkn\" (UniqueName: \"kubernetes.io/projected/9cd9e18b-6370-426d-8abe-52f0d39f7f79-kube-api-access-wjxkn\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.250675 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cd9e18b-6370-426d-8abe-52f0d39f7f79" (UID: "9cd9e18b-6370-426d-8abe-52f0d39f7f79"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.262400 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-config-data" (OuterVolumeSpecName: "config-data") pod "9cd9e18b-6370-426d-8abe-52f0d39f7f79" (UID: "9cd9e18b-6370-426d-8abe-52f0d39f7f79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.329538 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.329739 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd9e18b-6370-426d-8abe-52f0d39f7f79-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.524179 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="fae12ab6ce4b32e7095b166bc2001d0435bf314dafdb60059b95e31213f00b52" exitCode=0 Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.524276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"fae12ab6ce4b32e7095b166bc2001d0435bf314dafdb60059b95e31213f00b52"} Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.524529 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56"} Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 
08:49:22.524553 4867 scope.go:117] "RemoveContainer" containerID="81817d336fc213658d5e33bc8d0ea2842c8843cc5c0fbe3de4796b71ea1ba225" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.528176 4867 generic.go:334] "Generic (PLEG): container finished" podID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerID="23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339" exitCode=0 Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.528255 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.528296 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cd9e18b-6370-426d-8abe-52f0d39f7f79","Type":"ContainerDied","Data":"23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339"} Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.528453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cd9e18b-6370-426d-8abe-52f0d39f7f79","Type":"ContainerDied","Data":"06a4bafcc683c55bf1ba340d00f35b7e95f6dd91f9792e9c38a8a3a3d4520c5b"} Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.530600 4867 generic.go:334] "Generic (PLEG): container finished" podID="5e39c244-f85a-4705-8796-128001d4cde3" containerID="f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56" exitCode=143 Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.530767 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e39c244-f85a-4705-8796-128001d4cde3","Type":"ContainerDied","Data":"f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56"} Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.530983 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.559347 4867 scope.go:117] "RemoveContainer" 
containerID="ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.583004 4867 scope.go:117] "RemoveContainer" containerID="b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.583868 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.595092 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.600522 4867 scope.go:117] "RemoveContainer" containerID="23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.627637 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:22 crc kubenswrapper[4867]: E0101 08:49:22.628075 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="ceilometer-notification-agent" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.628091 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="ceilometer-notification-agent" Jan 01 08:49:22 crc kubenswrapper[4867]: E0101 08:49:22.628121 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="sg-core" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.628127 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="sg-core" Jan 01 08:49:22 crc kubenswrapper[4867]: E0101 08:49:22.628139 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="ceilometer-central-agent" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.628145 4867 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="ceilometer-central-agent" Jan 01 08:49:22 crc kubenswrapper[4867]: E0101 08:49:22.628164 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="proxy-httpd" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.628171 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="proxy-httpd" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.628350 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="ceilometer-central-agent" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.628367 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="sg-core" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.628375 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="proxy-httpd" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.628388 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" containerName="ceilometer-notification-agent" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.629972 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.632292 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.632448 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.634194 4867 scope.go:117] "RemoveContainer" containerID="90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.634524 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.635992 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.694065 4867 scope.go:117] "RemoveContainer" containerID="ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447" Jan 01 08:49:22 crc kubenswrapper[4867]: E0101 08:49:22.695025 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447\": container with ID starting with ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447 not found: ID does not exist" containerID="ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.695056 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447"} err="failed to get container status \"ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447\": rpc error: code = NotFound desc = could not find container \"ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447\": 
container with ID starting with ecae568c8982e459f89970f824f8c539337807eb70a2ffe9a0c8dd6cabf29447 not found: ID does not exist" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.695075 4867 scope.go:117] "RemoveContainer" containerID="b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9" Jan 01 08:49:22 crc kubenswrapper[4867]: E0101 08:49:22.704657 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9\": container with ID starting with b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9 not found: ID does not exist" containerID="b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.704687 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9"} err="failed to get container status \"b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9\": rpc error: code = NotFound desc = could not find container \"b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9\": container with ID starting with b40182c3ffdd51d16b98399f4fe00bea8f8954118be07ee063de165f18d682f9 not found: ID does not exist" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.704708 4867 scope.go:117] "RemoveContainer" containerID="23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339" Jan 01 08:49:22 crc kubenswrapper[4867]: E0101 08:49:22.707112 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339\": container with ID starting with 23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339 not found: ID does not exist" 
containerID="23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.707170 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339"} err="failed to get container status \"23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339\": rpc error: code = NotFound desc = could not find container \"23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339\": container with ID starting with 23b1a5c5a7305f474cbd849ae117406891852f6fbfbb6f8790975280852ac339 not found: ID does not exist" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.707205 4867 scope.go:117] "RemoveContainer" containerID="90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d" Jan 01 08:49:22 crc kubenswrapper[4867]: E0101 08:49:22.707690 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d\": container with ID starting with 90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d not found: ID does not exist" containerID="90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.707730 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d"} err="failed to get container status \"90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d\": rpc error: code = NotFound desc = could not find container \"90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d\": container with ID starting with 90cdb73017d5f3f08cfd02b4a18c28334240a94444b14877d63a5284c5148a4d not found: ID does not exist" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.734810 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.734908 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-log-httpd\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.735120 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-run-httpd\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.735199 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.735243 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-config-data\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.735357 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jpb\" (UniqueName: 
\"kubernetes.io/projected/958cf7f7-f879-4664-9498-8d57bd09a610-kube-api-access-45jpb\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.735449 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.735560 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-scripts\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.836866 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jpb\" (UniqueName: \"kubernetes.io/projected/958cf7f7-f879-4664-9498-8d57bd09a610-kube-api-access-45jpb\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.836953 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.836997 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-scripts\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " 
pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.837789 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.837851 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-log-httpd\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.837917 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-run-httpd\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.837972 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.837995 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-config-data\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.838396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-log-httpd\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.838474 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-run-httpd\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.843188 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.843479 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.843493 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-scripts\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.843822 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-config-data\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.844036 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.861383 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jpb\" (UniqueName: \"kubernetes.io/projected/958cf7f7-f879-4664-9498-8d57bd09a610-kube-api-access-45jpb\") pod \"ceilometer-0\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " pod="openstack/ceilometer-0" Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.966793 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:22 crc kubenswrapper[4867]: I0101 08:49:22.967639 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:23 crc kubenswrapper[4867]: I0101 08:49:23.152910 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd9e18b-6370-426d-8abe-52f0d39f7f79" path="/var/lib/kubelet/pods/9cd9e18b-6370-426d-8abe-52f0d39f7f79/volumes" Jan 01 08:49:23 crc kubenswrapper[4867]: I0101 08:49:23.441184 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:23 crc kubenswrapper[4867]: I0101 08:49:23.554908 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958cf7f7-f879-4664-9498-8d57bd09a610","Type":"ContainerStarted","Data":"f268c2fe4b087478349b651ad144a33af98f62756abf2245d72c9a170dd2a785"} Jan 01 08:49:24 crc kubenswrapper[4867]: I0101 08:49:24.575496 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958cf7f7-f879-4664-9498-8d57bd09a610","Type":"ContainerStarted","Data":"1052e9dffa2555f13954495051a973379522cfbe86b7caf00c4011162166caba"} Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.268869 
4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.393302 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-config-data\") pod \"5e39c244-f85a-4705-8796-128001d4cde3\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.393564 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9jrc\" (UniqueName: \"kubernetes.io/projected/5e39c244-f85a-4705-8796-128001d4cde3-kube-api-access-x9jrc\") pod \"5e39c244-f85a-4705-8796-128001d4cde3\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.393657 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-combined-ca-bundle\") pod \"5e39c244-f85a-4705-8796-128001d4cde3\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.393693 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e39c244-f85a-4705-8796-128001d4cde3-logs\") pod \"5e39c244-f85a-4705-8796-128001d4cde3\" (UID: \"5e39c244-f85a-4705-8796-128001d4cde3\") " Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.394525 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e39c244-f85a-4705-8796-128001d4cde3-logs" (OuterVolumeSpecName: "logs") pod "5e39c244-f85a-4705-8796-128001d4cde3" (UID: "5e39c244-f85a-4705-8796-128001d4cde3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.411224 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e39c244-f85a-4705-8796-128001d4cde3-kube-api-access-x9jrc" (OuterVolumeSpecName: "kube-api-access-x9jrc") pod "5e39c244-f85a-4705-8796-128001d4cde3" (UID: "5e39c244-f85a-4705-8796-128001d4cde3"). InnerVolumeSpecName "kube-api-access-x9jrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.419597 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-config-data" (OuterVolumeSpecName: "config-data") pod "5e39c244-f85a-4705-8796-128001d4cde3" (UID: "5e39c244-f85a-4705-8796-128001d4cde3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.428983 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e39c244-f85a-4705-8796-128001d4cde3" (UID: "5e39c244-f85a-4705-8796-128001d4cde3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.495642 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9jrc\" (UniqueName: \"kubernetes.io/projected/5e39c244-f85a-4705-8796-128001d4cde3-kube-api-access-x9jrc\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.495672 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.495686 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e39c244-f85a-4705-8796-128001d4cde3-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.495697 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e39c244-f85a-4705-8796-128001d4cde3-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.585331 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958cf7f7-f879-4664-9498-8d57bd09a610","Type":"ContainerStarted","Data":"2095c10ddf76eff95d5faccf408c3d246129513eeadd81646d10e32416ecd156"} Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.589142 4867 generic.go:334] "Generic (PLEG): container finished" podID="5e39c244-f85a-4705-8796-128001d4cde3" containerID="18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c" exitCode=0 Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.589178 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e39c244-f85a-4705-8796-128001d4cde3","Type":"ContainerDied","Data":"18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c"} Jan 01 08:49:25 crc 
kubenswrapper[4867]: I0101 08:49:25.589198 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e39c244-f85a-4705-8796-128001d4cde3","Type":"ContainerDied","Data":"8d0163237ccb1997aaf97aebe269d1e5d35bb7cc964b152f7d266f9612a54cec"} Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.589213 4867 scope.go:117] "RemoveContainer" containerID="18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.589345 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.628469 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.634027 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.634128 4867 scope.go:117] "RemoveContainer" containerID="f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.648605 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:25 crc kubenswrapper[4867]: E0101 08:49:25.648967 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e39c244-f85a-4705-8796-128001d4cde3" containerName="nova-api-api" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.648982 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e39c244-f85a-4705-8796-128001d4cde3" containerName="nova-api-api" Jan 01 08:49:25 crc kubenswrapper[4867]: E0101 08:49:25.649019 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e39c244-f85a-4705-8796-128001d4cde3" containerName="nova-api-log" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.649026 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e39c244-f85a-4705-8796-128001d4cde3" 
containerName="nova-api-log" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.649185 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e39c244-f85a-4705-8796-128001d4cde3" containerName="nova-api-api" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.649205 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e39c244-f85a-4705-8796-128001d4cde3" containerName="nova-api-log" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.650132 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.654196 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.654410 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.654449 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.664424 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.686038 4867 scope.go:117] "RemoveContainer" containerID="18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c" Jan 01 08:49:25 crc kubenswrapper[4867]: E0101 08:49:25.691316 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c\": container with ID starting with 18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c not found: ID does not exist" containerID="18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.691341 4867 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c"} err="failed to get container status \"18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c\": rpc error: code = NotFound desc = could not find container \"18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c\": container with ID starting with 18cc00d9f4fe7cf8d6e93600dfee079045baa8f15639e9f2adc34ea6f18cfd5c not found: ID does not exist" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.691360 4867 scope.go:117] "RemoveContainer" containerID="f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56" Jan 01 08:49:25 crc kubenswrapper[4867]: E0101 08:49:25.694880 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56\": container with ID starting with f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56 not found: ID does not exist" containerID="f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.694932 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56"} err="failed to get container status \"f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56\": rpc error: code = NotFound desc = could not find container \"f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56\": container with ID starting with f5edf9df88825c6ebca64f6da373d88a3a08740a86b319653515c9c430b8cc56 not found: ID does not exist" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.784949 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.801312 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16049526-b9a2-45ca-b45f-96b3b6e6ca15-logs\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.801344 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-internal-tls-certs\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.801421 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8btp\" (UniqueName: \"kubernetes.io/projected/16049526-b9a2-45ca-b45f-96b3b6e6ca15-kube-api-access-p8btp\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.801646 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.801671 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-public-tls-certs\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.801703 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-config-data\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.805770 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.904224 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16049526-b9a2-45ca-b45f-96b3b6e6ca15-logs\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.904282 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-internal-tls-certs\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.904354 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8btp\" (UniqueName: \"kubernetes.io/projected/16049526-b9a2-45ca-b45f-96b3b6e6ca15-kube-api-access-p8btp\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.904520 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.904574 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-public-tls-certs\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.904649 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-config-data\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.904674 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16049526-b9a2-45ca-b45f-96b3b6e6ca15-logs\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.913710 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-config-data\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.918177 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-internal-tls-certs\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.925185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.929523 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-public-tls-certs\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:25 crc kubenswrapper[4867]: I0101 08:49:25.948442 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8btp\" (UniqueName: \"kubernetes.io/projected/16049526-b9a2-45ca-b45f-96b3b6e6ca15-kube-api-access-p8btp\") pod \"nova-api-0\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " pod="openstack/nova-api-0" Jan 01 08:49:26 crc kubenswrapper[4867]: I0101 08:49:26.003682 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:49:26 crc kubenswrapper[4867]: I0101 08:49:26.534082 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:26 crc kubenswrapper[4867]: I0101 08:49:26.600339 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16049526-b9a2-45ca-b45f-96b3b6e6ca15","Type":"ContainerStarted","Data":"d254d82dc8b1a200330d122bf49a300d087a17674a6ea7a187c4751b44631f1e"} Jan 01 08:49:26 crc kubenswrapper[4867]: I0101 08:49:26.602529 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958cf7f7-f879-4664-9498-8d57bd09a610","Type":"ContainerStarted","Data":"67bf446583bfd6b3f377ac4de4caa213bc70618f4e24ded64b8e90ef4b31ba35"} Jan 01 08:49:26 crc kubenswrapper[4867]: I0101 08:49:26.624424 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:49:26 crc kubenswrapper[4867]: I0101 08:49:26.867732 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-z9fx6"] Jan 01 08:49:26 crc kubenswrapper[4867]: I0101 08:49:26.869158 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:26 crc kubenswrapper[4867]: I0101 08:49:26.877259 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 01 08:49:26 crc kubenswrapper[4867]: I0101 08:49:26.877872 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 01 08:49:26 crc kubenswrapper[4867]: I0101 08:49:26.894077 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z9fx6"] Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.029457 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn8c4\" (UniqueName: \"kubernetes.io/projected/33fdfdaa-2411-42a9-8c71-6062c9cc143d-kube-api-access-dn8c4\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.029522 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-scripts\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.029589 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-config-data\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.029705 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.131875 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.132246 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn8c4\" (UniqueName: \"kubernetes.io/projected/33fdfdaa-2411-42a9-8c71-6062c9cc143d-kube-api-access-dn8c4\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.132288 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-scripts\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.132353 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-config-data\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.142661 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.160317 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-config-data\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.160527 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-scripts\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.162475 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e39c244-f85a-4705-8796-128001d4cde3" path="/var/lib/kubelet/pods/5e39c244-f85a-4705-8796-128001d4cde3/volumes" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.167445 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn8c4\" (UniqueName: \"kubernetes.io/projected/33fdfdaa-2411-42a9-8c71-6062c9cc143d-kube-api-access-dn8c4\") pod \"nova-cell1-cell-mapping-z9fx6\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.380782 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.618393 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16049526-b9a2-45ca-b45f-96b3b6e6ca15","Type":"ContainerStarted","Data":"f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d"} Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.618789 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16049526-b9a2-45ca-b45f-96b3b6e6ca15","Type":"ContainerStarted","Data":"668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55"} Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.627556 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958cf7f7-f879-4664-9498-8d57bd09a610","Type":"ContainerStarted","Data":"a3f1a9852c6f6db9480e1cb17e44fe26e2d36f9875680d25d7d5ef37c3ace029"} Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.627659 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="ceilometer-central-agent" containerID="cri-o://1052e9dffa2555f13954495051a973379522cfbe86b7caf00c4011162166caba" gracePeriod=30 Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.627748 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="proxy-httpd" containerID="cri-o://a3f1a9852c6f6db9480e1cb17e44fe26e2d36f9875680d25d7d5ef37c3ace029" gracePeriod=30 Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.627780 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="sg-core" containerID="cri-o://67bf446583bfd6b3f377ac4de4caa213bc70618f4e24ded64b8e90ef4b31ba35" 
gracePeriod=30 Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.627810 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="ceilometer-notification-agent" containerID="cri-o://2095c10ddf76eff95d5faccf408c3d246129513eeadd81646d10e32416ecd156" gracePeriod=30 Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.639873 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.639852991 podStartE2EDuration="2.639852991s" podCreationTimestamp="2026-01-01 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:49:27.637335971 +0000 UTC m=+1376.772604750" watchObservedRunningTime="2026-01-01 08:49:27.639852991 +0000 UTC m=+1376.775121760" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.668065 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.202330116 podStartE2EDuration="5.668043956s" podCreationTimestamp="2026-01-01 08:49:22 +0000 UTC" firstStartedPulling="2026-01-01 08:49:23.455061999 +0000 UTC m=+1372.590330768" lastFinishedPulling="2026-01-01 08:49:26.920775839 +0000 UTC m=+1376.056044608" observedRunningTime="2026-01-01 08:49:27.660565328 +0000 UTC m=+1376.795834107" watchObservedRunningTime="2026-01-01 08:49:27.668043956 +0000 UTC m=+1376.803312735" Jan 01 08:49:27 crc kubenswrapper[4867]: I0101 08:49:27.835858 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z9fx6"] Jan 01 08:49:27 crc kubenswrapper[4867]: W0101 08:49:27.844215 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33fdfdaa_2411_42a9_8c71_6062c9cc143d.slice/crio-f9feb8caf57e115846d81b0097850063d940ea12e8551a3097d69694dc9596a2 WatchSource:0}: Error finding container f9feb8caf57e115846d81b0097850063d940ea12e8551a3097d69694dc9596a2: Status 404 returned error can't find the container with id f9feb8caf57e115846d81b0097850063d940ea12e8551a3097d69694dc9596a2 Jan 01 08:49:28 crc kubenswrapper[4867]: I0101 08:49:28.644535 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z9fx6" event={"ID":"33fdfdaa-2411-42a9-8c71-6062c9cc143d","Type":"ContainerStarted","Data":"59083f0140173ffc9cb86f628d124ca92657b3c9e015fa63f212aaaa828beee0"} Jan 01 08:49:28 crc kubenswrapper[4867]: I0101 08:49:28.644924 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z9fx6" event={"ID":"33fdfdaa-2411-42a9-8c71-6062c9cc143d","Type":"ContainerStarted","Data":"f9feb8caf57e115846d81b0097850063d940ea12e8551a3097d69694dc9596a2"} Jan 01 08:49:28 crc kubenswrapper[4867]: I0101 08:49:28.652031 4867 generic.go:334] "Generic (PLEG): container finished" podID="958cf7f7-f879-4664-9498-8d57bd09a610" containerID="a3f1a9852c6f6db9480e1cb17e44fe26e2d36f9875680d25d7d5ef37c3ace029" exitCode=0 Jan 01 08:49:28 crc kubenswrapper[4867]: I0101 08:49:28.652150 4867 generic.go:334] "Generic (PLEG): container finished" podID="958cf7f7-f879-4664-9498-8d57bd09a610" containerID="67bf446583bfd6b3f377ac4de4caa213bc70618f4e24ded64b8e90ef4b31ba35" exitCode=2 Jan 01 08:49:28 crc kubenswrapper[4867]: I0101 08:49:28.652328 4867 generic.go:334] "Generic (PLEG): container finished" podID="958cf7f7-f879-4664-9498-8d57bd09a610" containerID="2095c10ddf76eff95d5faccf408c3d246129513eeadd81646d10e32416ecd156" exitCode=0 Jan 01 08:49:28 crc kubenswrapper[4867]: I0101 08:49:28.652204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"958cf7f7-f879-4664-9498-8d57bd09a610","Type":"ContainerDied","Data":"a3f1a9852c6f6db9480e1cb17e44fe26e2d36f9875680d25d7d5ef37c3ace029"} Jan 01 08:49:28 crc kubenswrapper[4867]: I0101 08:49:28.652422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958cf7f7-f879-4664-9498-8d57bd09a610","Type":"ContainerDied","Data":"67bf446583bfd6b3f377ac4de4caa213bc70618f4e24ded64b8e90ef4b31ba35"} Jan 01 08:49:28 crc kubenswrapper[4867]: I0101 08:49:28.652448 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958cf7f7-f879-4664-9498-8d57bd09a610","Type":"ContainerDied","Data":"2095c10ddf76eff95d5faccf408c3d246129513eeadd81646d10e32416ecd156"} Jan 01 08:49:28 crc kubenswrapper[4867]: I0101 08:49:28.684116 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-z9fx6" podStartSLOduration=2.684093757 podStartE2EDuration="2.684093757s" podCreationTimestamp="2026-01-01 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:49:28.67126591 +0000 UTC m=+1377.806534719" watchObservedRunningTime="2026-01-01 08:49:28.684093757 +0000 UTC m=+1377.819362566" Jan 01 08:49:28 crc kubenswrapper[4867]: I0101 08:49:28.938014 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.038477 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77f475f9d5-jjvqz"] Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.039077 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" podUID="519d8b68-1fa4-425c-adc6-0a0687e3b165" containerName="dnsmasq-dns" containerID="cri-o://f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5" 
gracePeriod=10 Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.549833 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.661359 4867 generic.go:334] "Generic (PLEG): container finished" podID="519d8b68-1fa4-425c-adc6-0a0687e3b165" containerID="f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5" exitCode=0 Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.661405 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.661421 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" event={"ID":"519d8b68-1fa4-425c-adc6-0a0687e3b165","Type":"ContainerDied","Data":"f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5"} Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.662915 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f475f9d5-jjvqz" event={"ID":"519d8b68-1fa4-425c-adc6-0a0687e3b165","Type":"ContainerDied","Data":"b1e6f1789be25a4f226d5042a81af74160c714cc59c4b763cb6b3918929d6e1d"} Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.662937 4867 scope.go:117] "RemoveContainer" containerID="f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.682626 4867 scope.go:117] "RemoveContainer" containerID="44136619ebd82bbba2da47b47d55992993ea904571bfdfc06834af4fe36380f7" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.701484 4867 scope.go:117] "RemoveContainer" containerID="f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5" Jan 01 08:49:29 crc kubenswrapper[4867]: E0101 08:49:29.701950 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5\": container with ID starting with f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5 not found: ID does not exist" containerID="f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.701999 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5"} err="failed to get container status \"f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5\": rpc error: code = NotFound desc = could not find container \"f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5\": container with ID starting with f71cb1eabae5450a1744b6879c5683961cc538839ffb45158dbd409f06cc3ea5 not found: ID does not exist" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.702032 4867 scope.go:117] "RemoveContainer" containerID="44136619ebd82bbba2da47b47d55992993ea904571bfdfc06834af4fe36380f7" Jan 01 08:49:29 crc kubenswrapper[4867]: E0101 08:49:29.702380 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44136619ebd82bbba2da47b47d55992993ea904571bfdfc06834af4fe36380f7\": container with ID starting with 44136619ebd82bbba2da47b47d55992993ea904571bfdfc06834af4fe36380f7 not found: ID does not exist" containerID="44136619ebd82bbba2da47b47d55992993ea904571bfdfc06834af4fe36380f7" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.702424 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44136619ebd82bbba2da47b47d55992993ea904571bfdfc06834af4fe36380f7"} err="failed to get container status \"44136619ebd82bbba2da47b47d55992993ea904571bfdfc06834af4fe36380f7\": rpc error: code = NotFound desc = could not find container \"44136619ebd82bbba2da47b47d55992993ea904571bfdfc06834af4fe36380f7\": container 
with ID starting with 44136619ebd82bbba2da47b47d55992993ea904571bfdfc06834af4fe36380f7 not found: ID does not exist" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.739794 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-sb\") pod \"519d8b68-1fa4-425c-adc6-0a0687e3b165\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.740667 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtdw8\" (UniqueName: \"kubernetes.io/projected/519d8b68-1fa4-425c-adc6-0a0687e3b165-kube-api-access-rtdw8\") pod \"519d8b68-1fa4-425c-adc6-0a0687e3b165\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.740711 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-config\") pod \"519d8b68-1fa4-425c-adc6-0a0687e3b165\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.740754 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-svc\") pod \"519d8b68-1fa4-425c-adc6-0a0687e3b165\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.740804 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-swift-storage-0\") pod \"519d8b68-1fa4-425c-adc6-0a0687e3b165\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.740859 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-nb\") pod \"519d8b68-1fa4-425c-adc6-0a0687e3b165\" (UID: \"519d8b68-1fa4-425c-adc6-0a0687e3b165\") " Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.754221 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/519d8b68-1fa4-425c-adc6-0a0687e3b165-kube-api-access-rtdw8" (OuterVolumeSpecName: "kube-api-access-rtdw8") pod "519d8b68-1fa4-425c-adc6-0a0687e3b165" (UID: "519d8b68-1fa4-425c-adc6-0a0687e3b165"). InnerVolumeSpecName "kube-api-access-rtdw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.796503 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "519d8b68-1fa4-425c-adc6-0a0687e3b165" (UID: "519d8b68-1fa4-425c-adc6-0a0687e3b165"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.797339 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-config" (OuterVolumeSpecName: "config") pod "519d8b68-1fa4-425c-adc6-0a0687e3b165" (UID: "519d8b68-1fa4-425c-adc6-0a0687e3b165"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.797380 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "519d8b68-1fa4-425c-adc6-0a0687e3b165" (UID: "519d8b68-1fa4-425c-adc6-0a0687e3b165"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.798536 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "519d8b68-1fa4-425c-adc6-0a0687e3b165" (UID: "519d8b68-1fa4-425c-adc6-0a0687e3b165"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.799654 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "519d8b68-1fa4-425c-adc6-0a0687e3b165" (UID: "519d8b68-1fa4-425c-adc6-0a0687e3b165"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.842999 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.843041 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtdw8\" (UniqueName: \"kubernetes.io/projected/519d8b68-1fa4-425c-adc6-0a0687e3b165-kube-api-access-rtdw8\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.843174 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.843328 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 
08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.843354 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.843371 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/519d8b68-1fa4-425c-adc6-0a0687e3b165-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:29 crc kubenswrapper[4867]: I0101 08:49:29.999784 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77f475f9d5-jjvqz"] Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.008169 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77f475f9d5-jjvqz"] Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.710947 4867 generic.go:334] "Generic (PLEG): container finished" podID="958cf7f7-f879-4664-9498-8d57bd09a610" containerID="1052e9dffa2555f13954495051a973379522cfbe86b7caf00c4011162166caba" exitCode=0 Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.711083 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958cf7f7-f879-4664-9498-8d57bd09a610","Type":"ContainerDied","Data":"1052e9dffa2555f13954495051a973379522cfbe86b7caf00c4011162166caba"} Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.799085 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.965200 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-sg-core-conf-yaml\") pod \"958cf7f7-f879-4664-9498-8d57bd09a610\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.965310 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-run-httpd\") pod \"958cf7f7-f879-4664-9498-8d57bd09a610\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.965443 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45jpb\" (UniqueName: \"kubernetes.io/projected/958cf7f7-f879-4664-9498-8d57bd09a610-kube-api-access-45jpb\") pod \"958cf7f7-f879-4664-9498-8d57bd09a610\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.965529 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-log-httpd\") pod \"958cf7f7-f879-4664-9498-8d57bd09a610\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.965558 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-ceilometer-tls-certs\") pod \"958cf7f7-f879-4664-9498-8d57bd09a610\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.965583 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-scripts\") pod \"958cf7f7-f879-4664-9498-8d57bd09a610\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.965629 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-config-data\") pod \"958cf7f7-f879-4664-9498-8d57bd09a610\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.965686 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-combined-ca-bundle\") pod \"958cf7f7-f879-4664-9498-8d57bd09a610\" (UID: \"958cf7f7-f879-4664-9498-8d57bd09a610\") " Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.965852 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "958cf7f7-f879-4664-9498-8d57bd09a610" (UID: "958cf7f7-f879-4664-9498-8d57bd09a610"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.966106 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "958cf7f7-f879-4664-9498-8d57bd09a610" (UID: "958cf7f7-f879-4664-9498-8d57bd09a610"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.966651 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.966673 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958cf7f7-f879-4664-9498-8d57bd09a610-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.971296 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-scripts" (OuterVolumeSpecName: "scripts") pod "958cf7f7-f879-4664-9498-8d57bd09a610" (UID: "958cf7f7-f879-4664-9498-8d57bd09a610"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.978064 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/958cf7f7-f879-4664-9498-8d57bd09a610-kube-api-access-45jpb" (OuterVolumeSpecName: "kube-api-access-45jpb") pod "958cf7f7-f879-4664-9498-8d57bd09a610" (UID: "958cf7f7-f879-4664-9498-8d57bd09a610"). InnerVolumeSpecName "kube-api-access-45jpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:49:30 crc kubenswrapper[4867]: I0101 08:49:30.994815 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "958cf7f7-f879-4664-9498-8d57bd09a610" (UID: "958cf7f7-f879-4664-9498-8d57bd09a610"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.020084 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "958cf7f7-f879-4664-9498-8d57bd09a610" (UID: "958cf7f7-f879-4664-9498-8d57bd09a610"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.054024 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "958cf7f7-f879-4664-9498-8d57bd09a610" (UID: "958cf7f7-f879-4664-9498-8d57bd09a610"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.068338 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.068781 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.068911 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.069004 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-sg-core-conf-yaml\") on 
node \"crc\" DevicePath \"\"" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.069096 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45jpb\" (UniqueName: \"kubernetes.io/projected/958cf7f7-f879-4664-9498-8d57bd09a610-kube-api-access-45jpb\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.086326 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-config-data" (OuterVolumeSpecName: "config-data") pod "958cf7f7-f879-4664-9498-8d57bd09a610" (UID: "958cf7f7-f879-4664-9498-8d57bd09a610"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.153970 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="519d8b68-1fa4-425c-adc6-0a0687e3b165" path="/var/lib/kubelet/pods/519d8b68-1fa4-425c-adc6-0a0687e3b165/volumes" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.172468 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/958cf7f7-f879-4664-9498-8d57bd09a610-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.725954 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958cf7f7-f879-4664-9498-8d57bd09a610","Type":"ContainerDied","Data":"f268c2fe4b087478349b651ad144a33af98f62756abf2245d72c9a170dd2a785"} Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.726368 4867 scope.go:117] "RemoveContainer" containerID="a3f1a9852c6f6db9480e1cb17e44fe26e2d36f9875680d25d7d5ef37c3ace029" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.726116 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.766700 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.773957 4867 scope.go:117] "RemoveContainer" containerID="67bf446583bfd6b3f377ac4de4caa213bc70618f4e24ded64b8e90ef4b31ba35" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.798274 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.803575 4867 scope.go:117] "RemoveContainer" containerID="2095c10ddf76eff95d5faccf408c3d246129513eeadd81646d10e32416ecd156" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824047 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:31 crc kubenswrapper[4867]: E0101 08:49:31.824493 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="ceilometer-notification-agent" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824511 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="ceilometer-notification-agent" Jan 01 08:49:31 crc kubenswrapper[4867]: E0101 08:49:31.824528 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="proxy-httpd" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824535 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="proxy-httpd" Jan 01 08:49:31 crc kubenswrapper[4867]: E0101 08:49:31.824548 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="519d8b68-1fa4-425c-adc6-0a0687e3b165" containerName="init" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824555 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="519d8b68-1fa4-425c-adc6-0a0687e3b165" containerName="init" Jan 01 08:49:31 crc kubenswrapper[4867]: E0101 08:49:31.824567 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="519d8b68-1fa4-425c-adc6-0a0687e3b165" containerName="dnsmasq-dns" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824573 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="519d8b68-1fa4-425c-adc6-0a0687e3b165" containerName="dnsmasq-dns" Jan 01 08:49:31 crc kubenswrapper[4867]: E0101 08:49:31.824583 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="sg-core" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824589 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="sg-core" Jan 01 08:49:31 crc kubenswrapper[4867]: E0101 08:49:31.824603 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="ceilometer-central-agent" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824610 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="ceilometer-central-agent" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824769 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="ceilometer-central-agent" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824790 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="ceilometer-notification-agent" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824810 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="sg-core" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824820 4867 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="519d8b68-1fa4-425c-adc6-0a0687e3b165" containerName="dnsmasq-dns" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.824827 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" containerName="proxy-httpd" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.826384 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.833551 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.833930 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.835773 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.841723 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.846833 4867 scope.go:117] "RemoveContainer" containerID="1052e9dffa2555f13954495051a973379522cfbe86b7caf00c4011162166caba" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.986284 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-scripts\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.986332 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " 
pod="openstack/ceilometer-0" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.986350 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-log-httpd\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.986370 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.986388 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-config-data\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.986420 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5ggw\" (UniqueName: \"kubernetes.io/projected/8dad921b-d7dd-4113-85d2-78d6f59944b4-kube-api-access-c5ggw\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.986438 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:31 crc kubenswrapper[4867]: I0101 08:49:31.986497 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-run-httpd\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.092051 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-scripts\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.093781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.093841 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-log-httpd\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.093942 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.093989 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-config-data\") pod \"ceilometer-0\" (UID: 
\"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.094066 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5ggw\" (UniqueName: \"kubernetes.io/projected/8dad921b-d7dd-4113-85d2-78d6f59944b4-kube-api-access-c5ggw\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.094102 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.094808 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-log-httpd\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.095352 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-run-httpd\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.096067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-run-httpd\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.106709 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.106988 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-scripts\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.107590 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.109076 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-config-data\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.112128 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5ggw\" (UniqueName: \"kubernetes.io/projected/8dad921b-d7dd-4113-85d2-78d6f59944b4-kube-api-access-c5ggw\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.119807 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") " pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 
08:49:32.156632 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.674123 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:49:32 crc kubenswrapper[4867]: W0101 08:49:32.678000 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dad921b_d7dd_4113_85d2_78d6f59944b4.slice/crio-82fad26114b9d1d2173180067eb4d1901d30fad6a5254f634a2ac2616775e407 WatchSource:0}: Error finding container 82fad26114b9d1d2173180067eb4d1901d30fad6a5254f634a2ac2616775e407: Status 404 returned error can't find the container with id 82fad26114b9d1d2173180067eb4d1901d30fad6a5254f634a2ac2616775e407 Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.748834 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dad921b-d7dd-4113-85d2-78d6f59944b4","Type":"ContainerStarted","Data":"82fad26114b9d1d2173180067eb4d1901d30fad6a5254f634a2ac2616775e407"} Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.752863 4867 generic.go:334] "Generic (PLEG): container finished" podID="33fdfdaa-2411-42a9-8c71-6062c9cc143d" containerID="59083f0140173ffc9cb86f628d124ca92657b3c9e015fa63f212aaaa828beee0" exitCode=0 Jan 01 08:49:32 crc kubenswrapper[4867]: I0101 08:49:32.752934 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z9fx6" event={"ID":"33fdfdaa-2411-42a9-8c71-6062c9cc143d","Type":"ContainerDied","Data":"59083f0140173ffc9cb86f628d124ca92657b3c9e015fa63f212aaaa828beee0"} Jan 01 08:49:33 crc kubenswrapper[4867]: I0101 08:49:33.145477 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="958cf7f7-f879-4664-9498-8d57bd09a610" path="/var/lib/kubelet/pods/958cf7f7-f879-4664-9498-8d57bd09a610/volumes" Jan 01 08:49:33 crc kubenswrapper[4867]: I0101 08:49:33.764741 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dad921b-d7dd-4113-85d2-78d6f59944b4","Type":"ContainerStarted","Data":"b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049"} Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.177685 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.255352 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn8c4\" (UniqueName: \"kubernetes.io/projected/33fdfdaa-2411-42a9-8c71-6062c9cc143d-kube-api-access-dn8c4\") pod \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.255408 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-combined-ca-bundle\") pod \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.255436 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-config-data\") pod \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.255478 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-scripts\") pod \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\" (UID: \"33fdfdaa-2411-42a9-8c71-6062c9cc143d\") " Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.276831 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/33fdfdaa-2411-42a9-8c71-6062c9cc143d-kube-api-access-dn8c4" (OuterVolumeSpecName: "kube-api-access-dn8c4") pod "33fdfdaa-2411-42a9-8c71-6062c9cc143d" (UID: "33fdfdaa-2411-42a9-8c71-6062c9cc143d"). InnerVolumeSpecName "kube-api-access-dn8c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.277181 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-scripts" (OuterVolumeSpecName: "scripts") pod "33fdfdaa-2411-42a9-8c71-6062c9cc143d" (UID: "33fdfdaa-2411-42a9-8c71-6062c9cc143d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.292022 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33fdfdaa-2411-42a9-8c71-6062c9cc143d" (UID: "33fdfdaa-2411-42a9-8c71-6062c9cc143d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.292500 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-config-data" (OuterVolumeSpecName: "config-data") pod "33fdfdaa-2411-42a9-8c71-6062c9cc143d" (UID: "33fdfdaa-2411-42a9-8c71-6062c9cc143d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.358353 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn8c4\" (UniqueName: \"kubernetes.io/projected/33fdfdaa-2411-42a9-8c71-6062c9cc143d-kube-api-access-dn8c4\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.358419 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.358448 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.358471 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33fdfdaa-2411-42a9-8c71-6062c9cc143d-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.794339 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dad921b-d7dd-4113-85d2-78d6f59944b4","Type":"ContainerStarted","Data":"414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e"} Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.796726 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z9fx6" event={"ID":"33fdfdaa-2411-42a9-8c71-6062c9cc143d","Type":"ContainerDied","Data":"f9feb8caf57e115846d81b0097850063d940ea12e8551a3097d69694dc9596a2"} Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 08:49:34.796769 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9feb8caf57e115846d81b0097850063d940ea12e8551a3097d69694dc9596a2" Jan 01 08:49:34 crc kubenswrapper[4867]: I0101 
08:49:34.796795 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z9fx6" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.002595 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.003256 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="16049526-b9a2-45ca-b45f-96b3b6e6ca15" containerName="nova-api-log" containerID="cri-o://668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55" gracePeriod=30 Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.003401 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="16049526-b9a2-45ca-b45f-96b3b6e6ca15" containerName="nova-api-api" containerID="cri-o://f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d" gracePeriod=30 Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.026043 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.026290 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4b9eff86-c80d-4eb0-8a44-1e9c6511c90d" containerName="nova-scheduler-scheduler" containerID="cri-o://9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982" gracePeriod=30 Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.035732 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.036025 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-log" containerID="cri-o://9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59" gracePeriod=30 Jan 01 
08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.036416 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-metadata" containerID="cri-o://356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794" gracePeriod=30 Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.621590 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.687637 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16049526-b9a2-45ca-b45f-96b3b6e6ca15-logs\") pod \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.687748 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-internal-tls-certs\") pod \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.687842 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8btp\" (UniqueName: \"kubernetes.io/projected/16049526-b9a2-45ca-b45f-96b3b6e6ca15-kube-api-access-p8btp\") pod \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.687921 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-config-data\") pod \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.687982 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-combined-ca-bundle\") pod \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.688009 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-public-tls-certs\") pod \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\" (UID: \"16049526-b9a2-45ca-b45f-96b3b6e6ca15\") " Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.688693 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16049526-b9a2-45ca-b45f-96b3b6e6ca15-logs" (OuterVolumeSpecName: "logs") pod "16049526-b9a2-45ca-b45f-96b3b6e6ca15" (UID: "16049526-b9a2-45ca-b45f-96b3b6e6ca15"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.697049 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16049526-b9a2-45ca-b45f-96b3b6e6ca15-kube-api-access-p8btp" (OuterVolumeSpecName: "kube-api-access-p8btp") pod "16049526-b9a2-45ca-b45f-96b3b6e6ca15" (UID: "16049526-b9a2-45ca-b45f-96b3b6e6ca15"). InnerVolumeSpecName "kube-api-access-p8btp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.733034 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-config-data" (OuterVolumeSpecName: "config-data") pod "16049526-b9a2-45ca-b45f-96b3b6e6ca15" (UID: "16049526-b9a2-45ca-b45f-96b3b6e6ca15"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.745396 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16049526-b9a2-45ca-b45f-96b3b6e6ca15" (UID: "16049526-b9a2-45ca-b45f-96b3b6e6ca15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.785026 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "16049526-b9a2-45ca-b45f-96b3b6e6ca15" (UID: "16049526-b9a2-45ca-b45f-96b3b6e6ca15"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.788164 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "16049526-b9a2-45ca-b45f-96b3b6e6ca15" (UID: "16049526-b9a2-45ca-b45f-96b3b6e6ca15"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.789349 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.789379 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8btp\" (UniqueName: \"kubernetes.io/projected/16049526-b9a2-45ca-b45f-96b3b6e6ca15-kube-api-access-p8btp\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.789392 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.789400 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.789409 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16049526-b9a2-45ca-b45f-96b3b6e6ca15-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.789416 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16049526-b9a2-45ca-b45f-96b3b6e6ca15-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.809414 4867 generic.go:334] "Generic (PLEG): container finished" podID="16049526-b9a2-45ca-b45f-96b3b6e6ca15" containerID="f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d" exitCode=0 Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.809439 4867 generic.go:334] 
"Generic (PLEG): container finished" podID="16049526-b9a2-45ca-b45f-96b3b6e6ca15" containerID="668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55" exitCode=143 Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.809481 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16049526-b9a2-45ca-b45f-96b3b6e6ca15","Type":"ContainerDied","Data":"f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d"} Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.809503 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16049526-b9a2-45ca-b45f-96b3b6e6ca15","Type":"ContainerDied","Data":"668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55"} Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.809512 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16049526-b9a2-45ca-b45f-96b3b6e6ca15","Type":"ContainerDied","Data":"d254d82dc8b1a200330d122bf49a300d087a17674a6ea7a187c4751b44631f1e"} Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.809531 4867 scope.go:117] "RemoveContainer" containerID="f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.809629 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.816201 4867 generic.go:334] "Generic (PLEG): container finished" podID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerID="9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59" exitCode=143 Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.816264 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6957a8b-ec17-4cd0-8dab-5bb710fd0768","Type":"ContainerDied","Data":"9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59"} Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.819116 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dad921b-d7dd-4113-85d2-78d6f59944b4","Type":"ContainerStarted","Data":"46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02"} Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.836050 4867 scope.go:117] "RemoveContainer" containerID="668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.849414 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.865826 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.865994 4867 scope.go:117] "RemoveContainer" containerID="f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d" Jan 01 08:49:35 crc kubenswrapper[4867]: E0101 08:49:35.866592 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d\": container with ID starting with f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d not found: ID does not exist" 
containerID="f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.866621 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d"} err="failed to get container status \"f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d\": rpc error: code = NotFound desc = could not find container \"f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d\": container with ID starting with f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d not found: ID does not exist" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.866649 4867 scope.go:117] "RemoveContainer" containerID="668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55" Jan 01 08:49:35 crc kubenswrapper[4867]: E0101 08:49:35.867243 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55\": container with ID starting with 668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55 not found: ID does not exist" containerID="668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.867275 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55"} err="failed to get container status \"668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55\": rpc error: code = NotFound desc = could not find container \"668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55\": container with ID starting with 668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55 not found: ID does not exist" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.867288 4867 scope.go:117] 
"RemoveContainer" containerID="f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.867540 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d"} err="failed to get container status \"f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d\": rpc error: code = NotFound desc = could not find container \"f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d\": container with ID starting with f96bdb5bf99cd7c3974adfbc0f51845be58c810f5a63687749ba2d7e7f0fa74d not found: ID does not exist" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.867579 4867 scope.go:117] "RemoveContainer" containerID="668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.867848 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55"} err="failed to get container status \"668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55\": rpc error: code = NotFound desc = could not find container \"668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55\": container with ID starting with 668e80000d014d311a0a375ae96f35a0115172bc72c4a500687be0dea45f5b55 not found: ID does not exist" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.879769 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:35 crc kubenswrapper[4867]: E0101 08:49:35.880181 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16049526-b9a2-45ca-b45f-96b3b6e6ca15" containerName="nova-api-log" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.880198 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="16049526-b9a2-45ca-b45f-96b3b6e6ca15" containerName="nova-api-log" Jan 
01 08:49:35 crc kubenswrapper[4867]: E0101 08:49:35.880216 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fdfdaa-2411-42a9-8c71-6062c9cc143d" containerName="nova-manage" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.880224 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fdfdaa-2411-42a9-8c71-6062c9cc143d" containerName="nova-manage" Jan 01 08:49:35 crc kubenswrapper[4867]: E0101 08:49:35.880246 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16049526-b9a2-45ca-b45f-96b3b6e6ca15" containerName="nova-api-api" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.880252 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="16049526-b9a2-45ca-b45f-96b3b6e6ca15" containerName="nova-api-api" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.880400 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="16049526-b9a2-45ca-b45f-96b3b6e6ca15" containerName="nova-api-log" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.880423 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="16049526-b9a2-45ca-b45f-96b3b6e6ca15" containerName="nova-api-api" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.880435 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fdfdaa-2411-42a9-8c71-6062c9cc143d" containerName="nova-manage" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.881309 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.883810 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.884175 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.884323 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.890151 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-public-tls-certs\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.890306 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.890424 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda1d2c0-2470-41f9-9969-776f8883a38b-logs\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.890510 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zznmd\" (UniqueName: \"kubernetes.io/projected/cda1d2c0-2470-41f9-9969-776f8883a38b-kube-api-access-zznmd\") pod \"nova-api-0\" (UID: 
\"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.890621 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-config-data\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.890701 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.891530 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.991983 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda1d2c0-2470-41f9-9969-776f8883a38b-logs\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.992032 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zznmd\" (UniqueName: \"kubernetes.io/projected/cda1d2c0-2470-41f9-9969-776f8883a38b-kube-api-access-zznmd\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.992095 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-config-data\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " 
pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.992122 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.992150 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-public-tls-certs\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.992187 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.992588 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda1d2c0-2470-41f9-9969-776f8883a38b-logs\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.995571 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.995620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.995851 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-config-data\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:35 crc kubenswrapper[4867]: I0101 08:49:35.997261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-public-tls-certs\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:36 crc kubenswrapper[4867]: I0101 08:49:36.008115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zznmd\" (UniqueName: \"kubernetes.io/projected/cda1d2c0-2470-41f9-9969-776f8883a38b-kube-api-access-zznmd\") pod \"nova-api-0\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " pod="openstack/nova-api-0" Jan 01 08:49:36 crc kubenswrapper[4867]: I0101 08:49:36.203427 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:49:36 crc kubenswrapper[4867]: E0101 08:49:36.567495 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 01 08:49:36 crc kubenswrapper[4867]: E0101 08:49:36.569782 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 01 08:49:36 crc kubenswrapper[4867]: E0101 08:49:36.571179 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 01 08:49:36 crc kubenswrapper[4867]: E0101 08:49:36.571242 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4b9eff86-c80d-4eb0-8a44-1e9c6511c90d" containerName="nova-scheduler-scheduler" Jan 01 08:49:36 crc kubenswrapper[4867]: I0101 08:49:36.718769 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:49:36 crc kubenswrapper[4867]: W0101 08:49:36.721121 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcda1d2c0_2470_41f9_9969_776f8883a38b.slice/crio-c78780e541a9b5d44089e1b10d2b1c3d526e9e3b73adb115b1bfc1c415ead6a0 WatchSource:0}: Error finding container c78780e541a9b5d44089e1b10d2b1c3d526e9e3b73adb115b1bfc1c415ead6a0: Status 404 returned error can't find the container with id c78780e541a9b5d44089e1b10d2b1c3d526e9e3b73adb115b1bfc1c415ead6a0 Jan 01 08:49:36 crc kubenswrapper[4867]: I0101 08:49:36.842021 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dad921b-d7dd-4113-85d2-78d6f59944b4","Type":"ContainerStarted","Data":"9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2"} Jan 01 08:49:36 crc kubenswrapper[4867]: I0101 08:49:36.842156 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 01 08:49:36 crc kubenswrapper[4867]: I0101 08:49:36.848527 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cda1d2c0-2470-41f9-9969-776f8883a38b","Type":"ContainerStarted","Data":"c78780e541a9b5d44089e1b10d2b1c3d526e9e3b73adb115b1bfc1c415ead6a0"} Jan 01 08:49:37 crc kubenswrapper[4867]: I0101 08:49:37.156030 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16049526-b9a2-45ca-b45f-96b3b6e6ca15" path="/var/lib/kubelet/pods/16049526-b9a2-45ca-b45f-96b3b6e6ca15/volumes" Jan 01 08:49:37 crc kubenswrapper[4867]: I0101 08:49:37.883975 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cda1d2c0-2470-41f9-9969-776f8883a38b","Type":"ContainerStarted","Data":"c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17"} Jan 01 08:49:37 crc kubenswrapper[4867]: I0101 08:49:37.885123 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"cda1d2c0-2470-41f9-9969-776f8883a38b","Type":"ContainerStarted","Data":"b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179"} Jan 01 08:49:37 crc kubenswrapper[4867]: I0101 08:49:37.928360 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.330229239 podStartE2EDuration="6.928335784s" podCreationTimestamp="2026-01-01 08:49:31 +0000 UTC" firstStartedPulling="2026-01-01 08:49:32.683937469 +0000 UTC m=+1381.819206238" lastFinishedPulling="2026-01-01 08:49:36.282044004 +0000 UTC m=+1385.417312783" observedRunningTime="2026-01-01 08:49:36.878669707 +0000 UTC m=+1386.013938476" watchObservedRunningTime="2026-01-01 08:49:37.928335784 +0000 UTC m=+1387.063604583" Jan 01 08:49:37 crc kubenswrapper[4867]: I0101 08:49:37.941459 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.941433108 podStartE2EDuration="2.941433108s" podCreationTimestamp="2026-01-01 08:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:49:37.919295652 +0000 UTC m=+1387.054564461" watchObservedRunningTime="2026-01-01 08:49:37.941433108 +0000 UTC m=+1387.076701917" Jan 01 08:49:38 crc kubenswrapper[4867]: I0101 08:49:38.182289 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:38580->10.217.0.194:8775: read: connection reset by peer" Jan 01 08:49:38 crc kubenswrapper[4867]: I0101 08:49:38.183413 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read 
tcp 10.217.0.2:38576->10.217.0.194:8775: read: connection reset by peer" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.652646 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.661632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-nova-metadata-tls-certs\") pod \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.661679 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-logs\") pod \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.661718 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rxsx\" (UniqueName: \"kubernetes.io/projected/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-kube-api-access-7rxsx\") pod \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.661739 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-config-data\") pod \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\" (UID: \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.661770 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-combined-ca-bundle\") pod \"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\" (UID: 
\"e6957a8b-ec17-4cd0-8dab-5bb710fd0768\") " Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.663432 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-logs" (OuterVolumeSpecName: "logs") pod "e6957a8b-ec17-4cd0-8dab-5bb710fd0768" (UID: "e6957a8b-ec17-4cd0-8dab-5bb710fd0768"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.667867 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-kube-api-access-7rxsx" (OuterVolumeSpecName: "kube-api-access-7rxsx") pod "e6957a8b-ec17-4cd0-8dab-5bb710fd0768" (UID: "e6957a8b-ec17-4cd0-8dab-5bb710fd0768"). InnerVolumeSpecName "kube-api-access-7rxsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.726230 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e6957a8b-ec17-4cd0-8dab-5bb710fd0768" (UID: "e6957a8b-ec17-4cd0-8dab-5bb710fd0768"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.727917 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6957a8b-ec17-4cd0-8dab-5bb710fd0768" (UID: "e6957a8b-ec17-4cd0-8dab-5bb710fd0768"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.743531 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-config-data" (OuterVolumeSpecName: "config-data") pod "e6957a8b-ec17-4cd0-8dab-5bb710fd0768" (UID: "e6957a8b-ec17-4cd0-8dab-5bb710fd0768"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.764151 4867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.764186 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.764200 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rxsx\" (UniqueName: \"kubernetes.io/projected/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-kube-api-access-7rxsx\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.764215 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.764228 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6957a8b-ec17-4cd0-8dab-5bb710fd0768-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.895768 4867 generic.go:334] "Generic (PLEG): container finished" podID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" 
containerID="356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794" exitCode=0 Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.895841 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6957a8b-ec17-4cd0-8dab-5bb710fd0768","Type":"ContainerDied","Data":"356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794"} Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.895862 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.895914 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6957a8b-ec17-4cd0-8dab-5bb710fd0768","Type":"ContainerDied","Data":"f1540f8c963fbb8fa383f1b97e139442889ad9a71a0a7c1633c40196f1c3ec94"} Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.895941 4867 scope.go:117] "RemoveContainer" containerID="356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.932854 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.938032 4867 scope.go:117] "RemoveContainer" containerID="9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:38.969511 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.003456 4867 scope.go:117] "RemoveContainer" containerID="356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794" Jan 01 08:49:39 crc kubenswrapper[4867]: E0101 08:49:39.004045 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794\": container with ID starting with 
356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794 not found: ID does not exist" containerID="356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.004082 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794"} err="failed to get container status \"356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794\": rpc error: code = NotFound desc = could not find container \"356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794\": container with ID starting with 356cad7efac4f0b020691384b49bb64e24caa87f420e466b6e5cfc155590a794 not found: ID does not exist" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.004108 4867 scope.go:117] "RemoveContainer" containerID="9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59" Jan 01 08:49:39 crc kubenswrapper[4867]: E0101 08:49:39.004876 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59\": container with ID starting with 9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59 not found: ID does not exist" containerID="9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.004932 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59"} err="failed to get container status \"9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59\": rpc error: code = NotFound desc = could not find container \"9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59\": container with ID starting with 9b4577e665ce732cfa00dd33590202e0ea78c4f187e0fd4acbdf6494ea291d59 not found: ID does not 
exist" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.004972 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:49:39 crc kubenswrapper[4867]: E0101 08:49:39.005472 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-log" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.005487 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-log" Jan 01 08:49:39 crc kubenswrapper[4867]: E0101 08:49:39.005512 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-metadata" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.005520 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-metadata" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.005758 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-log" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.005773 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" containerName="nova-metadata-metadata" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.007159 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.009316 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.011138 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.018248 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.089257 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.089290 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qk5x\" (UniqueName: \"kubernetes.io/projected/e7003b80-53fa-4550-8f18-486a0f7988c9-kube-api-access-5qk5x\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.089313 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-config-data\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.089450 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7003b80-53fa-4550-8f18-486a0f7988c9-logs\") pod \"nova-metadata-0\" (UID: 
\"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.089539 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.138274 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6957a8b-ec17-4cd0-8dab-5bb710fd0768" path="/var/lib/kubelet/pods/e6957a8b-ec17-4cd0-8dab-5bb710fd0768/volumes" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.192391 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.192419 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qk5x\" (UniqueName: \"kubernetes.io/projected/e7003b80-53fa-4550-8f18-486a0f7988c9-kube-api-access-5qk5x\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.192449 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-config-data\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.192475 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e7003b80-53fa-4550-8f18-486a0f7988c9-logs\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.192529 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.195282 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7003b80-53fa-4550-8f18-486a0f7988c9-logs\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.198012 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-config-data\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.198258 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.198699 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc 
kubenswrapper[4867]: I0101 08:49:39.214489 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qk5x\" (UniqueName: \"kubernetes.io/projected/e7003b80-53fa-4550-8f18-486a0f7988c9-kube-api-access-5qk5x\") pod \"nova-metadata-0\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.322767 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.765975 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:49:39 crc kubenswrapper[4867]: W0101 08:49:39.776623 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7003b80_53fa_4550_8f18_486a0f7988c9.slice/crio-31cdc5c3e7f0ac669624cccf9fc0cf73f710405f6bdf9b7380ffbd4d7e0196d0 WatchSource:0}: Error finding container 31cdc5c3e7f0ac669624cccf9fc0cf73f710405f6bdf9b7380ffbd4d7e0196d0: Status 404 returned error can't find the container with id 31cdc5c3e7f0ac669624cccf9fc0cf73f710405f6bdf9b7380ffbd4d7e0196d0 Jan 01 08:49:39 crc kubenswrapper[4867]: I0101 08:49:39.910337 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7003b80-53fa-4550-8f18-486a0f7988c9","Type":"ContainerStarted","Data":"31cdc5c3e7f0ac669624cccf9fc0cf73f710405f6bdf9b7380ffbd4d7e0196d0"} Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.562879 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.620658 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-combined-ca-bundle\") pod \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.621281 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-config-data\") pod \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.621350 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m74kx\" (UniqueName: \"kubernetes.io/projected/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-kube-api-access-m74kx\") pod \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\" (UID: \"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d\") " Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.625590 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-kube-api-access-m74kx" (OuterVolumeSpecName: "kube-api-access-m74kx") pod "4b9eff86-c80d-4eb0-8a44-1e9c6511c90d" (UID: "4b9eff86-c80d-4eb0-8a44-1e9c6511c90d"). InnerVolumeSpecName "kube-api-access-m74kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.677012 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-config-data" (OuterVolumeSpecName: "config-data") pod "4b9eff86-c80d-4eb0-8a44-1e9c6511c90d" (UID: "4b9eff86-c80d-4eb0-8a44-1e9c6511c90d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.678186 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b9eff86-c80d-4eb0-8a44-1e9c6511c90d" (UID: "4b9eff86-c80d-4eb0-8a44-1e9c6511c90d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.724388 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.724433 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.724445 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m74kx\" (UniqueName: \"kubernetes.io/projected/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d-kube-api-access-m74kx\") on node \"crc\" DevicePath \"\"" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.924488 4867 generic.go:334] "Generic (PLEG): container finished" podID="4b9eff86-c80d-4eb0-8a44-1e9c6511c90d" containerID="9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982" exitCode=0 Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.924547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d","Type":"ContainerDied","Data":"9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982"} Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.924572 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.924615 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b9eff86-c80d-4eb0-8a44-1e9c6511c90d","Type":"ContainerDied","Data":"d8f4660105e8f8775e189d85f67c1273646ebfb074d208d234896b0c50e9cf88"} Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.924647 4867 scope.go:117] "RemoveContainer" containerID="9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.927806 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7003b80-53fa-4550-8f18-486a0f7988c9","Type":"ContainerStarted","Data":"3cd677561763860feb64840f8907414cdd4cd64aae8107b33f87bbbd3b84da9d"} Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.927842 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7003b80-53fa-4550-8f18-486a0f7988c9","Type":"ContainerStarted","Data":"d63d8143a81a71b834c747786bff3a5cc7e1868dc867d35f729e24492192e1ec"} Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.959808 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9597809809999998 podStartE2EDuration="2.959780981s" podCreationTimestamp="2026-01-01 08:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:49:40.950721288 +0000 UTC m=+1390.085990057" watchObservedRunningTime="2026-01-01 08:49:40.959780981 +0000 UTC m=+1390.095049750" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.961549 4867 scope.go:117] "RemoveContainer" containerID="9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982" Jan 01 08:49:40 crc kubenswrapper[4867]: E0101 08:49:40.962087 4867 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982\": container with ID starting with 9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982 not found: ID does not exist" containerID="9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.962146 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982"} err="failed to get container status \"9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982\": rpc error: code = NotFound desc = could not find container \"9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982\": container with ID starting with 9c77d5a97fabc9b4bb9a0cc9407ed27256502d418156abe1b4ff4d50a8b42982 not found: ID does not exist" Jan 01 08:49:40 crc kubenswrapper[4867]: I0101 08:49:40.983661 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.018340 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.043699 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:49:41 crc kubenswrapper[4867]: E0101 08:49:41.044160 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9eff86-c80d-4eb0-8a44-1e9c6511c90d" containerName="nova-scheduler-scheduler" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.044178 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9eff86-c80d-4eb0-8a44-1e9c6511c90d" containerName="nova-scheduler-scheduler" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.044364 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9eff86-c80d-4eb0-8a44-1e9c6511c90d" 
containerName="nova-scheduler-scheduler" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.045002 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.047204 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.051759 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.136234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpgt6\" (UniqueName: \"kubernetes.io/projected/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-kube-api-access-gpgt6\") pod \"nova-scheduler-0\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " pod="openstack/nova-scheduler-0" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.136279 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-config-data\") pod \"nova-scheduler-0\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " pod="openstack/nova-scheduler-0" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.136299 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " pod="openstack/nova-scheduler-0" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.145115 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b9eff86-c80d-4eb0-8a44-1e9c6511c90d" path="/var/lib/kubelet/pods/4b9eff86-c80d-4eb0-8a44-1e9c6511c90d/volumes" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 
08:49:41.237622 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpgt6\" (UniqueName: \"kubernetes.io/projected/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-kube-api-access-gpgt6\") pod \"nova-scheduler-0\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " pod="openstack/nova-scheduler-0" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.237928 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-config-data\") pod \"nova-scheduler-0\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " pod="openstack/nova-scheduler-0" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.237956 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " pod="openstack/nova-scheduler-0" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.243014 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-config-data\") pod \"nova-scheduler-0\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " pod="openstack/nova-scheduler-0" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.243014 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " pod="openstack/nova-scheduler-0" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.265147 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpgt6\" (UniqueName: 
\"kubernetes.io/projected/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-kube-api-access-gpgt6\") pod \"nova-scheduler-0\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " pod="openstack/nova-scheduler-0" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.365034 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:49:41 crc kubenswrapper[4867]: I0101 08:49:41.921041 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:49:41 crc kubenswrapper[4867]: W0101 08:49:41.929309 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d2e6f4b_31bf_4ad6_89ed_ebdd4f3aa5d9.slice/crio-0a8a2c1f4672ffb058bbef5bc7420b021bb60cf0bcf689b95da09dc5f0b6f793 WatchSource:0}: Error finding container 0a8a2c1f4672ffb058bbef5bc7420b021bb60cf0bcf689b95da09dc5f0b6f793: Status 404 returned error can't find the container with id 0a8a2c1f4672ffb058bbef5bc7420b021bb60cf0bcf689b95da09dc5f0b6f793 Jan 01 08:49:42 crc kubenswrapper[4867]: I0101 08:49:42.961352 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9","Type":"ContainerStarted","Data":"3d733e18f1ee0ab5fdfc275f4b701971bfd4e30736094221d5f2e06640b3bfa5"} Jan 01 08:49:42 crc kubenswrapper[4867]: I0101 08:49:42.961830 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9","Type":"ContainerStarted","Data":"0a8a2c1f4672ffb058bbef5bc7420b021bb60cf0bcf689b95da09dc5f0b6f793"} Jan 01 08:49:42 crc kubenswrapper[4867]: I0101 08:49:42.993627 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.993597851 podStartE2EDuration="2.993597851s" podCreationTimestamp="2026-01-01 08:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 08:49:42.98421495 +0000 UTC m=+1392.119483749" watchObservedRunningTime="2026-01-01 08:49:42.993597851 +0000 UTC m=+1392.128866650" Jan 01 08:49:44 crc kubenswrapper[4867]: I0101 08:49:44.322961 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 01 08:49:44 crc kubenswrapper[4867]: I0101 08:49:44.323438 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 01 08:49:46 crc kubenswrapper[4867]: I0101 08:49:46.204079 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 01 08:49:46 crc kubenswrapper[4867]: I0101 08:49:46.204429 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 01 08:49:46 crc kubenswrapper[4867]: I0101 08:49:46.365400 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 01 08:49:47 crc kubenswrapper[4867]: I0101 08:49:47.223085 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 01 08:49:47 crc kubenswrapper[4867]: I0101 08:49:47.223097 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.161078 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-45pjd"] Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 
08:49:49.164174 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.166844 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-45pjd"] Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.320373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-utilities\") pod \"redhat-operators-45pjd\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.320445 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv7w8\" (UniqueName: \"kubernetes.io/projected/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-kube-api-access-dv7w8\") pod \"redhat-operators-45pjd\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.320580 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-catalog-content\") pod \"redhat-operators-45pjd\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.322945 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.323006 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.422283 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-catalog-content\") pod \"redhat-operators-45pjd\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.422402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-utilities\") pod \"redhat-operators-45pjd\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.422771 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-catalog-content\") pod \"redhat-operators-45pjd\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.422851 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-utilities\") pod \"redhat-operators-45pjd\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.422980 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv7w8\" (UniqueName: \"kubernetes.io/projected/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-kube-api-access-dv7w8\") pod \"redhat-operators-45pjd\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.463670 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv7w8\" 
(UniqueName: \"kubernetes.io/projected/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-kube-api-access-dv7w8\") pod \"redhat-operators-45pjd\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.518159 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:49 crc kubenswrapper[4867]: I0101 08:49:49.971218 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-45pjd"] Jan 01 08:49:49 crc kubenswrapper[4867]: W0101 08:49:49.975120 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2b5aa2_ef14_47eb_a42c_f6d3481b9bc5.slice/crio-7e46e71fccbb5e3b5a6fe98d366a9870d9357c27354d612cff23c1578accc4cd WatchSource:0}: Error finding container 7e46e71fccbb5e3b5a6fe98d366a9870d9357c27354d612cff23c1578accc4cd: Status 404 returned error can't find the container with id 7e46e71fccbb5e3b5a6fe98d366a9870d9357c27354d612cff23c1578accc4cd Jan 01 08:49:50 crc kubenswrapper[4867]: I0101 08:49:50.042422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pjd" event={"ID":"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5","Type":"ContainerStarted","Data":"7e46e71fccbb5e3b5a6fe98d366a9870d9357c27354d612cff23c1578accc4cd"} Jan 01 08:49:50 crc kubenswrapper[4867]: I0101 08:49:50.336070 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 01 08:49:50 crc kubenswrapper[4867]: I0101 08:49:50.336103 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 01 08:49:51 crc kubenswrapper[4867]: I0101 08:49:51.058059 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerID="21632bc05639eea1e0763f4c86398d3f0b079d41a488977254284e2e21f2cef2" exitCode=0 Jan 01 08:49:51 crc kubenswrapper[4867]: I0101 08:49:51.058268 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pjd" event={"ID":"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5","Type":"ContainerDied","Data":"21632bc05639eea1e0763f4c86398d3f0b079d41a488977254284e2e21f2cef2"} Jan 01 08:49:51 crc kubenswrapper[4867]: I0101 08:49:51.366423 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 01 08:49:51 crc kubenswrapper[4867]: I0101 08:49:51.394685 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 01 08:49:52 crc kubenswrapper[4867]: I0101 08:49:52.066738 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pjd" event={"ID":"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5","Type":"ContainerStarted","Data":"d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8"} Jan 01 08:49:52 crc kubenswrapper[4867]: I0101 08:49:52.098734 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 01 08:49:54 crc kubenswrapper[4867]: E0101 08:49:54.374663 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2b5aa2_ef14_47eb_a42c_f6d3481b9bc5.slice/crio-conmon-d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8.scope\": 
RecentStats: unable to find data in memory cache]" Jan 01 08:49:55 crc kubenswrapper[4867]: I0101 08:49:55.101675 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerID="d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8" exitCode=0 Jan 01 08:49:55 crc kubenswrapper[4867]: I0101 08:49:55.101737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pjd" event={"ID":"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5","Type":"ContainerDied","Data":"d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8"} Jan 01 08:49:56 crc kubenswrapper[4867]: I0101 08:49:56.115191 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pjd" event={"ID":"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5","Type":"ContainerStarted","Data":"67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661"} Jan 01 08:49:56 crc kubenswrapper[4867]: I0101 08:49:56.151866 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-45pjd" podStartSLOduration=2.572339457 podStartE2EDuration="7.15184419s" podCreationTimestamp="2026-01-01 08:49:49 +0000 UTC" firstStartedPulling="2026-01-01 08:49:51.061481843 +0000 UTC m=+1400.196750652" lastFinishedPulling="2026-01-01 08:49:55.640986606 +0000 UTC m=+1404.776255385" observedRunningTime="2026-01-01 08:49:56.138630132 +0000 UTC m=+1405.273898931" watchObservedRunningTime="2026-01-01 08:49:56.15184419 +0000 UTC m=+1405.287112969" Jan 01 08:49:56 crc kubenswrapper[4867]: I0101 08:49:56.210010 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 01 08:49:56 crc kubenswrapper[4867]: I0101 08:49:56.210769 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 01 08:49:56 crc kubenswrapper[4867]: I0101 08:49:56.210987 4867 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 01 08:49:56 crc kubenswrapper[4867]: I0101 08:49:56.216095 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 01 08:49:57 crc kubenswrapper[4867]: I0101 08:49:57.145133 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 01 08:49:57 crc kubenswrapper[4867]: I0101 08:49:57.158691 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 01 08:49:59 crc kubenswrapper[4867]: I0101 08:49:59.330489 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 01 08:49:59 crc kubenswrapper[4867]: I0101 08:49:59.333998 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 01 08:49:59 crc kubenswrapper[4867]: I0101 08:49:59.342678 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 01 08:49:59 crc kubenswrapper[4867]: I0101 08:49:59.519782 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:49:59 crc kubenswrapper[4867]: I0101 08:49:59.520186 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:50:00 crc kubenswrapper[4867]: I0101 08:50:00.287464 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 01 08:50:00 crc kubenswrapper[4867]: I0101 08:50:00.823409 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-45pjd" podUID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerName="registry-server" probeResult="failure" output=< Jan 01 08:50:00 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 01 08:50:00 crc 
kubenswrapper[4867]: > Jan 01 08:50:02 crc kubenswrapper[4867]: I0101 08:50:02.168799 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 01 08:50:09 crc kubenswrapper[4867]: I0101 08:50:09.602035 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:50:09 crc kubenswrapper[4867]: I0101 08:50:09.650619 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:50:09 crc kubenswrapper[4867]: I0101 08:50:09.857921 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-45pjd"] Jan 01 08:50:11 crc kubenswrapper[4867]: I0101 08:50:11.287352 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-45pjd" podUID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerName="registry-server" containerID="cri-o://67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661" gracePeriod=2 Jan 01 08:50:11 crc kubenswrapper[4867]: I0101 08:50:11.779710 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:50:11 crc kubenswrapper[4867]: I0101 08:50:11.862274 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-catalog-content\") pod \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " Jan 01 08:50:11 crc kubenswrapper[4867]: I0101 08:50:11.862353 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-utilities\") pod \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " Jan 01 08:50:11 crc kubenswrapper[4867]: I0101 08:50:11.862491 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv7w8\" (UniqueName: \"kubernetes.io/projected/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-kube-api-access-dv7w8\") pod \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\" (UID: \"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5\") " Jan 01 08:50:11 crc kubenswrapper[4867]: I0101 08:50:11.863373 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-utilities" (OuterVolumeSpecName: "utilities") pod "1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" (UID: "1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:11 crc kubenswrapper[4867]: I0101 08:50:11.869966 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-kube-api-access-dv7w8" (OuterVolumeSpecName: "kube-api-access-dv7w8") pod "1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" (UID: "1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5"). InnerVolumeSpecName "kube-api-access-dv7w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:11 crc kubenswrapper[4867]: I0101 08:50:11.973961 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:11 crc kubenswrapper[4867]: I0101 08:50:11.974004 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv7w8\" (UniqueName: \"kubernetes.io/projected/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-kube-api-access-dv7w8\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:11 crc kubenswrapper[4867]: I0101 08:50:11.984589 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" (UID: "1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.095524 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.304506 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerID="67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661" exitCode=0 Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.304563 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-45pjd" Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.304581 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pjd" event={"ID":"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5","Type":"ContainerDied","Data":"67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661"} Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.305284 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pjd" event={"ID":"1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5","Type":"ContainerDied","Data":"7e46e71fccbb5e3b5a6fe98d366a9870d9357c27354d612cff23c1578accc4cd"} Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.305343 4867 scope.go:117] "RemoveContainer" containerID="67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661" Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.331879 4867 scope.go:117] "RemoveContainer" containerID="d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8" Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.376235 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-45pjd"] Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.388367 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-45pjd"] Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.388694 4867 scope.go:117] "RemoveContainer" containerID="21632bc05639eea1e0763f4c86398d3f0b079d41a488977254284e2e21f2cef2" Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.439543 4867 scope.go:117] "RemoveContainer" containerID="67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661" Jan 01 08:50:12 crc kubenswrapper[4867]: E0101 08:50:12.440222 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661\": container with ID starting with 67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661 not found: ID does not exist" containerID="67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661" Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.440269 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661"} err="failed to get container status \"67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661\": rpc error: code = NotFound desc = could not find container \"67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661\": container with ID starting with 67369888ed967cb902b0306732c526562b2952c5a414315904734ec53e9df661 not found: ID does not exist" Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.440294 4867 scope.go:117] "RemoveContainer" containerID="d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8" Jan 01 08:50:12 crc kubenswrapper[4867]: E0101 08:50:12.440690 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8\": container with ID starting with d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8 not found: ID does not exist" containerID="d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8" Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.440718 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8"} err="failed to get container status \"d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8\": rpc error: code = NotFound desc = could not find container \"d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8\": container with ID 
starting with d0fd6e8332df210ee55845537ef941b1620feacdfee5db808ac3b62132234be8 not found: ID does not exist" Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.440733 4867 scope.go:117] "RemoveContainer" containerID="21632bc05639eea1e0763f4c86398d3f0b079d41a488977254284e2e21f2cef2" Jan 01 08:50:12 crc kubenswrapper[4867]: E0101 08:50:12.441121 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21632bc05639eea1e0763f4c86398d3f0b079d41a488977254284e2e21f2cef2\": container with ID starting with 21632bc05639eea1e0763f4c86398d3f0b079d41a488977254284e2e21f2cef2 not found: ID does not exist" containerID="21632bc05639eea1e0763f4c86398d3f0b079d41a488977254284e2e21f2cef2" Jan 01 08:50:12 crc kubenswrapper[4867]: I0101 08:50:12.441147 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21632bc05639eea1e0763f4c86398d3f0b079d41a488977254284e2e21f2cef2"} err="failed to get container status \"21632bc05639eea1e0763f4c86398d3f0b079d41a488977254284e2e21f2cef2\": rpc error: code = NotFound desc = could not find container \"21632bc05639eea1e0763f4c86398d3f0b079d41a488977254284e2e21f2cef2\": container with ID starting with 21632bc05639eea1e0763f4c86398d3f0b079d41a488977254284e2e21f2cef2 not found: ID does not exist" Jan 01 08:50:13 crc kubenswrapper[4867]: I0101 08:50:13.147436 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" path="/var/lib/kubelet/pods/1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5/volumes" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.795148 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.795818 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" containerName="openstackclient" 
containerID="cri-o://e698afe95247188c1349dc45263790341fa72c55e4b91893ad3b356253a8a571" gracePeriod=2 Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.821733 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f3c1-account-create-update-lnb82"] Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.829556 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-69c1-account-create-update-4fm55"] Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.850978 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f3c1-account-create-update-lnb82"] Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.861969 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-69c1-account-create-update-4fm55"] Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.888070 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fdllh"] Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.901595 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.913973 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fdllh"] Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.933957 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f3c1-account-create-update-lpffc"] Jan 01 08:50:24 crc kubenswrapper[4867]: E0101 08:50:24.934647 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerName="extract-utilities" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.934664 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerName="extract-utilities" Jan 01 08:50:24 crc kubenswrapper[4867]: E0101 08:50:24.934687 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerName="extract-content" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.934693 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerName="extract-content" Jan 01 08:50:24 crc kubenswrapper[4867]: E0101 08:50:24.934707 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerName="registry-server" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.934713 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerName="registry-server" Jan 01 08:50:24 crc kubenswrapper[4867]: E0101 08:50:24.934729 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" containerName="openstackclient" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.934734 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" containerName="openstackclient" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.934972 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" containerName="openstackclient" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.934986 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2b5aa2-ef14-47eb-a42c-f6d3481b9bc5" containerName="registry-server" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.935589 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f3c1-account-create-update-lpffc" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.943105 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-69c1-account-create-update-xbwpk"] Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.944320 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-69c1-account-create-update-xbwpk" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.946942 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.947201 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 01 08:50:24 crc kubenswrapper[4867]: I0101 08:50:24.983175 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f3c1-account-create-update-lpffc"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.077754 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vvv\" (UniqueName: \"kubernetes.io/projected/6fe85b54-84b3-46ab-94b7-597ffd52f997-kube-api-access-q2vvv\") pod \"glance-69c1-account-create-update-xbwpk\" (UID: \"6fe85b54-84b3-46ab-94b7-597ffd52f997\") " pod="openstack/glance-69c1-account-create-update-xbwpk" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.077854 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-operator-scripts\") pod \"nova-api-f3c1-account-create-update-lpffc\" (UID: \"22daf9e9-6114-4fc4-951e-da0e7b92c4b8\") " pod="openstack/nova-api-f3c1-account-create-update-lpffc" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.077890 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe85b54-84b3-46ab-94b7-597ffd52f997-operator-scripts\") pod \"glance-69c1-account-create-update-xbwpk\" (UID: \"6fe85b54-84b3-46ab-94b7-597ffd52f997\") " pod="openstack/glance-69c1-account-create-update-xbwpk" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.077927 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp2mm\" (UniqueName: \"kubernetes.io/projected/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-kube-api-access-gp2mm\") pod \"nova-api-f3c1-account-create-update-lpffc\" (UID: \"22daf9e9-6114-4fc4-951e-da0e7b92c4b8\") " pod="openstack/nova-api-f3c1-account-create-update-lpffc" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.117391 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7n8cs"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.119285 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7n8cs" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.129497 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.181135 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19551dba-c741-42e0-b228-6cad78717264-operator-scripts\") pod \"root-account-create-update-7n8cs\" (UID: \"19551dba-c741-42e0-b228-6cad78717264\") " pod="openstack/root-account-create-update-7n8cs" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.181180 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-operator-scripts\") pod \"nova-api-f3c1-account-create-update-lpffc\" (UID: \"22daf9e9-6114-4fc4-951e-da0e7b92c4b8\") " pod="openstack/nova-api-f3c1-account-create-update-lpffc" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.181211 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe85b54-84b3-46ab-94b7-597ffd52f997-operator-scripts\") pod 
\"glance-69c1-account-create-update-xbwpk\" (UID: \"6fe85b54-84b3-46ab-94b7-597ffd52f997\") " pod="openstack/glance-69c1-account-create-update-xbwpk" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.181235 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp2mm\" (UniqueName: \"kubernetes.io/projected/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-kube-api-access-gp2mm\") pod \"nova-api-f3c1-account-create-update-lpffc\" (UID: \"22daf9e9-6114-4fc4-951e-da0e7b92c4b8\") " pod="openstack/nova-api-f3c1-account-create-update-lpffc" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.181317 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcx8x\" (UniqueName: \"kubernetes.io/projected/19551dba-c741-42e0-b228-6cad78717264-kube-api-access-vcx8x\") pod \"root-account-create-update-7n8cs\" (UID: \"19551dba-c741-42e0-b228-6cad78717264\") " pod="openstack/root-account-create-update-7n8cs" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.181342 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vvv\" (UniqueName: \"kubernetes.io/projected/6fe85b54-84b3-46ab-94b7-597ffd52f997-kube-api-access-q2vvv\") pod \"glance-69c1-account-create-update-xbwpk\" (UID: \"6fe85b54-84b3-46ab-94b7-597ffd52f997\") " pod="openstack/glance-69c1-account-create-update-xbwpk" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.191155 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-operator-scripts\") pod \"nova-api-f3c1-account-create-update-lpffc\" (UID: \"22daf9e9-6114-4fc4-951e-da0e7b92c4b8\") " pod="openstack/nova-api-f3c1-account-create-update-lpffc" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.191567 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe85b54-84b3-46ab-94b7-597ffd52f997-operator-scripts\") pod \"glance-69c1-account-create-update-xbwpk\" (UID: \"6fe85b54-84b3-46ab-94b7-597ffd52f997\") " pod="openstack/glance-69c1-account-create-update-xbwpk" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.195243 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24dd201f-c983-42e0-9fcc-c80c8d38f545" path="/var/lib/kubelet/pods/24dd201f-c983-42e0-9fcc-c80c8d38f545/volumes" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.196054 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ddda60-ee4a-453c-82fb-bb99e16fc076" path="/var/lib/kubelet/pods/44ddda60-ee4a-453c-82fb-bb99e16fc076/volumes" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.198873 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="967c4acd-2c93-49a1-9b42-71e23f0b28d0" path="/var/lib/kubelet/pods/967c4acd-2c93-49a1-9b42-71e23f0b28d0/volumes" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.199498 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-69c1-account-create-update-xbwpk"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.213734 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7n8cs"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.225745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp2mm\" (UniqueName: \"kubernetes.io/projected/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-kube-api-access-gp2mm\") pod \"nova-api-f3c1-account-create-update-lpffc\" (UID: \"22daf9e9-6114-4fc4-951e-da0e7b92c4b8\") " pod="openstack/nova-api-f3c1-account-create-update-lpffc" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.243879 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vvv\" (UniqueName: 
\"kubernetes.io/projected/6fe85b54-84b3-46ab-94b7-597ffd52f997-kube-api-access-q2vvv\") pod \"glance-69c1-account-create-update-xbwpk\" (UID: \"6fe85b54-84b3-46ab-94b7-597ffd52f997\") " pod="openstack/glance-69c1-account-create-update-xbwpk" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.265514 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c514-account-create-update-qfw8m"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.271231 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c514-account-create-update-qfw8m"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.285415 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcx8x\" (UniqueName: \"kubernetes.io/projected/19551dba-c741-42e0-b228-6cad78717264-kube-api-access-vcx8x\") pod \"root-account-create-update-7n8cs\" (UID: \"19551dba-c741-42e0-b228-6cad78717264\") " pod="openstack/root-account-create-update-7n8cs" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.285522 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19551dba-c741-42e0-b228-6cad78717264-operator-scripts\") pod \"root-account-create-update-7n8cs\" (UID: \"19551dba-c741-42e0-b228-6cad78717264\") " pod="openstack/root-account-create-update-7n8cs" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.286409 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19551dba-c741-42e0-b228-6cad78717264-operator-scripts\") pod \"root-account-create-update-7n8cs\" (UID: \"19551dba-c741-42e0-b228-6cad78717264\") " pod="openstack/root-account-create-update-7n8cs" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.287747 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 08:50:25 crc kubenswrapper[4867]: 
I0101 08:50:25.290780 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f3c1-account-create-update-lpffc" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.336388 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2ede-account-create-update-2qf6t"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.346064 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2ede-account-create-update-2qf6t"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.351935 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcx8x\" (UniqueName: \"kubernetes.io/projected/19551dba-c741-42e0-b228-6cad78717264-kube-api-access-vcx8x\") pod \"root-account-create-update-7n8cs\" (UID: \"19551dba-c741-42e0-b228-6cad78717264\") " pod="openstack/root-account-create-update-7n8cs" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.383288 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-69c1-account-create-update-xbwpk" Jan 01 08:50:25 crc kubenswrapper[4867]: E0101 08:50:25.398777 4867 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 01 08:50:25 crc kubenswrapper[4867]: E0101 08:50:25.398854 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data podName:1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99 nodeName:}" failed. No retries permitted until 2026-01-01 08:50:25.898833934 +0000 UTC m=+1435.034102703 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data") pod "rabbitmq-cell1-server-0" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99") : configmap "rabbitmq-cell1-config-data" not found Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.425016 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.425447 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="1620c75e-1129-4850-9b27-7666e4cb8ed5" containerName="openstack-network-exporter" containerID="cri-o://cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b" gracePeriod=300 Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.445377 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9ba1-account-create-update-cvb54"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.446844 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.468838 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9ba1-account-create-update-cvb54"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.472904 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.504351 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7n8cs" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.516974 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9ba1-account-create-update-27nl2"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.551867 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9ba1-account-create-update-27nl2"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.583667 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.584049 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="654c613f-4f96-41f0-8937-d4be9f7897da" containerName="openstack-network-exporter" containerID="cri-o://af04740eea97da4b3747aedaa2d322eabd244cf11d0911b2ba02cff1211719ab" gracePeriod=300 Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.594308 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="1620c75e-1129-4850-9b27-7666e4cb8ed5" containerName="ovsdbserver-sb" containerID="cri-o://b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099" gracePeriod=300 Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.608276 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6fd580-15f1-4929-b211-ecb1dc767e7c-operator-scripts\") pod \"nova-cell0-9ba1-account-create-update-cvb54\" (UID: \"6c6fd580-15f1-4929-b211-ecb1dc767e7c\") " pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.608356 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrc6\" (UniqueName: 
\"kubernetes.io/projected/6c6fd580-15f1-4929-b211-ecb1dc767e7c-kube-api-access-vdrc6\") pod \"nova-cell0-9ba1-account-create-update-cvb54\" (UID: \"6c6fd580-15f1-4929-b211-ecb1dc767e7c\") " pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.709851 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6fd580-15f1-4929-b211-ecb1dc767e7c-operator-scripts\") pod \"nova-cell0-9ba1-account-create-update-cvb54\" (UID: \"6c6fd580-15f1-4929-b211-ecb1dc767e7c\") " pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.710149 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdrc6\" (UniqueName: \"kubernetes.io/projected/6c6fd580-15f1-4929-b211-ecb1dc767e7c-kube-api-access-vdrc6\") pod \"nova-cell0-9ba1-account-create-update-cvb54\" (UID: \"6c6fd580-15f1-4929-b211-ecb1dc767e7c\") " pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.711472 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6fd580-15f1-4929-b211-ecb1dc767e7c-operator-scripts\") pod \"nova-cell0-9ba1-account-create-update-cvb54\" (UID: \"6c6fd580-15f1-4929-b211-ecb1dc767e7c\") " pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.711530 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5a0b-account-create-update-dfp4s"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.741525 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdrc6\" (UniqueName: \"kubernetes.io/projected/6c6fd580-15f1-4929-b211-ecb1dc767e7c-kube-api-access-vdrc6\") pod 
\"nova-cell0-9ba1-account-create-update-cvb54\" (UID: \"6c6fd580-15f1-4929-b211-ecb1dc767e7c\") " pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.750967 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-p6csz"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.773338 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="654c613f-4f96-41f0-8937-d4be9f7897da" containerName="ovsdbserver-nb" containerID="cri-o://799a9220b793a8689047c599a1077afcad897844454e5de218f99838ce959d39" gracePeriod=300 Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.773460 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-p6csz"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.790956 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5a0b-account-create-update-dfp4s"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.812432 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-cdg7k"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.836867 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-cdg7k"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.852266 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1f09-account-create-update-mb5l4"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.867306 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1f09-account-create-update-mb5l4"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.867508 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.873020 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.883573 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-hhzrt"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.889902 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-hhzrt"] Jan 01 08:50:25 crc kubenswrapper[4867]: E0101 08:50:25.901554 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099 is running failed: container process not found" containerID="b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 01 08:50:25 crc kubenswrapper[4867]: E0101 08:50:25.904971 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099 is running failed: container process not found" containerID="b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 01 08:50:25 crc kubenswrapper[4867]: E0101 08:50:25.907576 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099 is running failed: container process not found" containerID="b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 01 08:50:25 crc kubenswrapper[4867]: E0101 08:50:25.907611 4867 prober.go:104] "Probe errored" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="1620c75e-1129-4850-9b27-7666e4cb8ed5" containerName="ovsdbserver-sb" Jan 01 08:50:25 crc kubenswrapper[4867]: E0101 08:50:25.915225 4867 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 01 08:50:25 crc kubenswrapper[4867]: E0101 08:50:25.915303 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data podName:1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99 nodeName:}" failed. No retries permitted until 2026-01-01 08:50:26.915280364 +0000 UTC m=+1436.050549133 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data") pod "rabbitmq-cell1-server-0" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99") : configmap "rabbitmq-cell1-config-data" not found Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.943817 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-93ba-account-create-update-q4kzs"] Jan 01 08:50:25 crc kubenswrapper[4867]: I0101 08:50:25.984515 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-49xz8"] Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.034700 4867 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.034778 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data podName:84d7aac6-1073-41c0-acff-169e36ec197d nodeName:}" failed. 
No retries permitted until 2026-01-01 08:50:26.53475524 +0000 UTC m=+1435.670023999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data") pod "rabbitmq-server-0" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d") : configmap "rabbitmq-config-data" not found Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.055377 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-49xz8"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.082575 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-93ba-account-create-update-q4kzs"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.101378 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.101964 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerName="ovn-northd" containerID="cri-o://ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2" gracePeriod=30 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.102224 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerName="openstack-network-exporter" containerID="cri-o://92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c" gracePeriod=30 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.124469 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f3c1-account-create-update-lpffc"] Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.140888 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.147075 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 01 08:50:26 crc kubenswrapper[4867]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: if [ -n "nova_api" ]; then Jan 01 08:50:26 crc kubenswrapper[4867]: GRANT_DATABASE="nova_api" Jan 01 08:50:26 crc kubenswrapper[4867]: else Jan 01 08:50:26 crc kubenswrapper[4867]: GRANT_DATABASE="*" Jan 01 08:50:26 crc kubenswrapper[4867]: fi Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: # going for maximum compatibility here: Jan 01 08:50:26 crc kubenswrapper[4867]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 01 08:50:26 crc kubenswrapper[4867]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 01 08:50:26 crc kubenswrapper[4867]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 01 08:50:26 crc kubenswrapper[4867]: # support updates Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: $MYSQL_CMD < logger="UnhandledError" Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.149300 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-f3c1-account-create-update-lpffc" podUID="22daf9e9-6114-4fc4-951e-da0e7b92c4b8" Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.150371 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.159061 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.159155 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerName="ovn-northd" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.197905 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-sx242"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 
08:50:26.216497 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-sx242"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.327998 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-z9fx6"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.361309 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-z9fx6"] Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.395338 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 01 08:50:26 crc kubenswrapper[4867]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: if [ -n "glance" ]; then Jan 01 08:50:26 crc kubenswrapper[4867]: GRANT_DATABASE="glance" Jan 01 08:50:26 crc kubenswrapper[4867]: else Jan 01 08:50:26 crc kubenswrapper[4867]: GRANT_DATABASE="*" Jan 01 08:50:26 crc kubenswrapper[4867]: fi Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: # going for maximum compatibility here: Jan 01 08:50:26 crc kubenswrapper[4867]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 01 08:50:26 crc kubenswrapper[4867]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 01 08:50:26 crc kubenswrapper[4867]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 01 08:50:26 crc kubenswrapper[4867]: # support updates Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: $MYSQL_CMD < logger="UnhandledError" Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.397156 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-69c1-account-create-update-xbwpk" podUID="6fe85b54-84b3-46ab-94b7-597ffd52f997" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.415958 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-dlqwt"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.475966 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-dlqwt"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.486180 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-69c1-account-create-update-xbwpk"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.490994 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8jl6r"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.501278 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-4gj2t"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.501522 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-4gj2t" podUID="63c4f874-d21a-42b7-884a-f070d8dc2150" containerName="openstack-network-exporter" containerID="cri-o://880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541" gracePeriod=30 Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.549764 4867 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not 
found Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.550060 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data podName:84d7aac6-1073-41c0-acff-169e36ec197d nodeName:}" failed. No retries permitted until 2026-01-01 08:50:27.550043907 +0000 UTC m=+1436.685312676 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data") pod "rabbitmq-server-0" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d") : configmap "rabbitmq-config-data" not found Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.557957 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-67dd85d5b6-ww7ll"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.558177 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-67dd85d5b6-ww7ll" podUID="1822baf8-11aa-4152-a74f-2ce0383c1094" containerName="placement-log" containerID="cri-o://455b0cde75a033b7a0c94fdc6b6b1dd1216e9777beb9c14b66a6998f6b2fa1d5" gracePeriod=30 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.558255 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-67dd85d5b6-ww7ll" podUID="1822baf8-11aa-4152-a74f-2ce0383c1094" containerName="placement-api" containerID="cri-o://e80411603dc0ac8d446f1e707d73b2bad909e42859006cf6a585616040d3b259" gracePeriod=30 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.575314 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1620c75e-1129-4850-9b27-7666e4cb8ed5/ovsdbserver-sb/0.log" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.575384 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.576652 4867 generic.go:334] "Generic (PLEG): container finished" podID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerID="92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c" exitCode=2 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.576697 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9943de7c-1d29-416f-ba57-ea51bf9e56f3","Type":"ContainerDied","Data":"92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c"} Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.584239 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-69c1-account-create-update-xbwpk" event={"ID":"6fe85b54-84b3-46ab-94b7-597ffd52f997","Type":"ContainerStarted","Data":"8b37800d130be3203cc2409edf08f6371d1b9d3c13c6090d58baa805f8616ad6"} Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.586252 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f3c1-account-create-update-lpffc" event={"ID":"22daf9e9-6114-4fc4-951e-da0e7b92c4b8","Type":"ContainerStarted","Data":"46df4a5294094fbf8952da3670a54d9847fc3e78401e131458a99dde358a3bfc"} Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.586367 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 01 08:50:26 crc kubenswrapper[4867]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 01 08:50:26 crc 
kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: if [ -n "glance" ]; then Jan 01 08:50:26 crc kubenswrapper[4867]: GRANT_DATABASE="glance" Jan 01 08:50:26 crc kubenswrapper[4867]: else Jan 01 08:50:26 crc kubenswrapper[4867]: GRANT_DATABASE="*" Jan 01 08:50:26 crc kubenswrapper[4867]: fi Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: # going for maximum compatibility here: Jan 01 08:50:26 crc kubenswrapper[4867]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 01 08:50:26 crc kubenswrapper[4867]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 01 08:50:26 crc kubenswrapper[4867]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 01 08:50:26 crc kubenswrapper[4867]: # support updates Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: $MYSQL_CMD < logger="UnhandledError" Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.587769 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-69c1-account-create-update-xbwpk" podUID="6fe85b54-84b3-46ab-94b7-597ffd52f997" Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.591457 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 01 08:50:26 crc kubenswrapper[4867]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 
01 08:50:26 crc kubenswrapper[4867]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: if [ -n "nova_api" ]; then Jan 01 08:50:26 crc kubenswrapper[4867]: GRANT_DATABASE="nova_api" Jan 01 08:50:26 crc kubenswrapper[4867]: else Jan 01 08:50:26 crc kubenswrapper[4867]: GRANT_DATABASE="*" Jan 01 08:50:26 crc kubenswrapper[4867]: fi Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: # going for maximum compatibility here: Jan 01 08:50:26 crc kubenswrapper[4867]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 01 08:50:26 crc kubenswrapper[4867]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 01 08:50:26 crc kubenswrapper[4867]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 01 08:50:26 crc kubenswrapper[4867]: # support updates Jan 01 08:50:26 crc kubenswrapper[4867]: Jan 01 08:50:26 crc kubenswrapper[4867]: $MYSQL_CMD < logger="UnhandledError" Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.592527 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-f3c1-account-create-update-lpffc" podUID="22daf9e9-6114-4fc4-951e-da0e7b92c4b8" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.610502 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_654c613f-4f96-41f0-8937-d4be9f7897da/ovsdbserver-nb/0.log" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.610557 4867 generic.go:334] "Generic (PLEG): container finished" podID="654c613f-4f96-41f0-8937-d4be9f7897da" 
containerID="af04740eea97da4b3747aedaa2d322eabd244cf11d0911b2ba02cff1211719ab" exitCode=2 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.610576 4867 generic.go:334] "Generic (PLEG): container finished" podID="654c613f-4f96-41f0-8937-d4be9f7897da" containerID="799a9220b793a8689047c599a1077afcad897844454e5de218f99838ce959d39" exitCode=143 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.610651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"654c613f-4f96-41f0-8937-d4be9f7897da","Type":"ContainerDied","Data":"af04740eea97da4b3747aedaa2d322eabd244cf11d0911b2ba02cff1211719ab"} Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.610685 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"654c613f-4f96-41f0-8937-d4be9f7897da","Type":"ContainerDied","Data":"799a9220b793a8689047c599a1077afcad897844454e5de218f99838ce959d39"} Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.614858 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-smgl6"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.628971 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1620c75e-1129-4850-9b27-7666e4cb8ed5/ovsdbserver-sb/0.log" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.629020 4867 generic.go:334] "Generic (PLEG): container finished" podID="1620c75e-1129-4850-9b27-7666e4cb8ed5" containerID="cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b" exitCode=2 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.629040 4867 generic.go:334] "Generic (PLEG): container finished" podID="1620c75e-1129-4850-9b27-7666e4cb8ed5" containerID="b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099" exitCode=143 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.629065 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"1620c75e-1129-4850-9b27-7666e4cb8ed5","Type":"ContainerDied","Data":"cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b"} Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.629094 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1620c75e-1129-4850-9b27-7666e4cb8ed5","Type":"ContainerDied","Data":"b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099"} Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.629107 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1620c75e-1129-4850-9b27-7666e4cb8ed5","Type":"ContainerDied","Data":"6fb001ab3687da1505fb876aa0cf0ffe799d47211ed2d8b9d98e1fcd49d7501b"} Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.629124 4867 scope.go:117] "RemoveContainer" containerID="cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.629245 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.640285 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.640569 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3205b065-c067-4035-8afb-e2bbcc7d8a42" containerName="cinder-scheduler" containerID="cri-o://eb7dcef39a55694c9e76f1f5778b1c287c9ba1f1a1711c0d8fbaaad900a62405" gracePeriod=30 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.640732 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3205b065-c067-4035-8afb-e2bbcc7d8a42" containerName="probe" containerID="cri-o://2308efd8efc29d35e443b922f20dee961e0822be16a9b0b3be84cb600b8719cd" gracePeriod=30 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.660320 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-sn8tf"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.758117 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-combined-ca-bundle\") pod \"1620c75e-1129-4850-9b27-7666e4cb8ed5\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.758220 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"1620c75e-1129-4850-9b27-7666e4cb8ed5\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.758251 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-scripts\") pod \"1620c75e-1129-4850-9b27-7666e4cb8ed5\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.758285 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdbserver-sb-tls-certs\") pod \"1620c75e-1129-4850-9b27-7666e4cb8ed5\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.758309 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdb-rundir\") pod \"1620c75e-1129-4850-9b27-7666e4cb8ed5\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.758329 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-metrics-certs-tls-certs\") pod \"1620c75e-1129-4850-9b27-7666e4cb8ed5\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.758362 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-config\") pod \"1620c75e-1129-4850-9b27-7666e4cb8ed5\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.758382 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vccg\" (UniqueName: \"kubernetes.io/projected/1620c75e-1129-4850-9b27-7666e4cb8ed5-kube-api-access-8vccg\") pod \"1620c75e-1129-4850-9b27-7666e4cb8ed5\" (UID: \"1620c75e-1129-4850-9b27-7666e4cb8ed5\") " Jan 01 08:50:26 crc 
kubenswrapper[4867]: I0101 08:50:26.759717 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "1620c75e-1129-4850-9b27-7666e4cb8ed5" (UID: "1620c75e-1129-4850-9b27-7666e4cb8ed5"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.764150 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-config" (OuterVolumeSpecName: "config") pod "1620c75e-1129-4850-9b27-7666e4cb8ed5" (UID: "1620c75e-1129-4850-9b27-7666e4cb8ed5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.768667 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-sn8tf"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.769202 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-scripts" (OuterVolumeSpecName: "scripts") pod "1620c75e-1129-4850-9b27-7666e4cb8ed5" (UID: "1620c75e-1129-4850-9b27-7666e4cb8ed5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.783251 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "1620c75e-1129-4850-9b27-7666e4cb8ed5" (UID: "1620c75e-1129-4850-9b27-7666e4cb8ed5"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.784155 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1620c75e-1129-4850-9b27-7666e4cb8ed5-kube-api-access-8vccg" (OuterVolumeSpecName: "kube-api-access-8vccg") pod "1620c75e-1129-4850-9b27-7666e4cb8ed5" (UID: "1620c75e-1129-4850-9b27-7666e4cb8ed5"). InnerVolumeSpecName "kube-api-access-8vccg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.787526 4867 scope.go:117] "RemoveContainer" containerID="b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.879065 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.879330 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vccg\" (UniqueName: \"kubernetes.io/projected/1620c75e-1129-4850-9b27-7666e4cb8ed5-kube-api-access-8vccg\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.879356 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.879366 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1620c75e-1129-4850-9b27-7666e4cb8ed5-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.879376 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdb-rundir\") on node \"crc\" DevicePath 
\"\"" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.902199 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "1620c75e-1129-4850-9b27-7666e4cb8ed5" (UID: "1620c75e-1129-4850-9b27-7666e4cb8ed5"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.908986 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.909304 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6f47f095-abde-4e07-8edf-d0a318043581" containerName="glance-log" containerID="cri-o://c1af335d05f310408a3a3e7e9c132db515267848f5873efa7c468ee6eea3edc6" gracePeriod=30 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.909803 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6f47f095-abde-4e07-8edf-d0a318043581" containerName="glance-httpd" containerID="cri-o://6fd8f4c7059e184922dd9a91f3056bb550d7c290a243aea1fe9c949fb9fa29c7" gracePeriod=30 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.917996 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbdfbb78f-5g78q"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.918200 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" podUID="d579322c-12b7-488b-8220-31ef35016c68" containerName="dnsmasq-dns" containerID="cri-o://9aaec2bdb437295ba5550a821ae8d1f9f3e0bbb9ba3c726894b4022fa400f982" gracePeriod=10 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.928992 4867 scope.go:117] "RemoveContainer" 
containerID="cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b" Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.932329 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b\": container with ID starting with cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b not found: ID does not exist" containerID="cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.932427 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b"} err="failed to get container status \"cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b\": rpc error: code = NotFound desc = could not find container \"cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b\": container with ID starting with cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b not found: ID does not exist" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.932470 4867 scope.go:117] "RemoveContainer" containerID="b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.932844 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.933147 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" containerName="cinder-api-log" containerID="cri-o://faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489" gracePeriod=30 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.933332 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" containerName="cinder-api" containerID="cri-o://c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb" gracePeriod=30 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.940770 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xsp84"] Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.942166 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099\": container with ID starting with b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099 not found: ID does not exist" containerID="b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.942234 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099"} err="failed to get container status \"b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099\": rpc error: code = NotFound desc = could not find container \"b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099\": container with ID starting with b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099 not found: ID does not exist" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.942293 4867 scope.go:117] "RemoveContainer" containerID="cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.943230 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b"} err="failed to get container status \"cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b\": rpc error: code = NotFound desc = could not find container 
\"cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b\": container with ID starting with cfc5a3301aa0d535569c3eedda51c82ee9f19aa64f208230f2ac95139ea3327b not found: ID does not exist" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.943247 4867 scope.go:117] "RemoveContainer" containerID="b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.945624 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.945672 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xsp84"] Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.951145 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1620c75e-1129-4850-9b27-7666e4cb8ed5" (UID: "1620c75e-1129-4850-9b27-7666e4cb8ed5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.953356 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099"} err="failed to get container status \"b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099\": rpc error: code = NotFound desc = could not find container \"b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099\": container with ID starting with b82144172ec57a88a4a7d071777b91eb41831acba69913c40e0826ab964b9099 not found: ID does not exist" Jan 01 08:50:26 crc kubenswrapper[4867]: W0101 08:50:26.974267 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c6fd580_15f1_4929_b211_ecb1dc767e7c.slice/crio-8074aa337305fadd047e17c8b587cc37fb27e4963a3bb6030312dfb8ebf64947 WatchSource:0}: Error finding container 8074aa337305fadd047e17c8b587cc37fb27e4963a3bb6030312dfb8ebf64947: Status 404 returned error can't find the container with id 8074aa337305fadd047e17c8b587cc37fb27e4963a3bb6030312dfb8ebf64947 Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.981474 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.981502 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:26 crc kubenswrapper[4867]: I0101 08:50:26.981512 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 01 
08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.981568 4867 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 01 08:50:26 crc kubenswrapper[4867]: E0101 08:50:26.981607 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data podName:1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99 nodeName:}" failed. No retries permitted until 2026-01-01 08:50:28.981593663 +0000 UTC m=+1438.116862422 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data") pod "rabbitmq-cell1-server-0" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99") : configmap "rabbitmq-cell1-config-data" not found Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.013840 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 01 08:50:27 crc kubenswrapper[4867]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: if [ -n "nova_cell0" ]; then Jan 01 08:50:27 crc kubenswrapper[4867]: GRANT_DATABASE="nova_cell0" Jan 01 08:50:27 crc kubenswrapper[4867]: else Jan 01 08:50:27 crc kubenswrapper[4867]: GRANT_DATABASE="*" Jan 01 08:50:27 crc kubenswrapper[4867]: fi Jan 
01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: # going for maximum compatibility here: Jan 01 08:50:27 crc kubenswrapper[4867]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 01 08:50:27 crc kubenswrapper[4867]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 01 08:50:27 crc kubenswrapper[4867]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 01 08:50:27 crc kubenswrapper[4867]: # support updates Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: $MYSQL_CMD < logger="UnhandledError" Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.015393 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" podUID="6c6fd580-15f1-4929-b211-ecb1dc767e7c" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.020194 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fb785fd89-9d8g9"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.020454 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fb785fd89-9d8g9" podUID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" containerName="neutron-api" containerID="cri-o://729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.020862 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fb785fd89-9d8g9" podUID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" containerName="neutron-httpd" containerID="cri-o://bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.104712 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.105721 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" containerName="glance-log" containerID="cri-o://c147d7635f762a0bb4d5c3b4b921b293ef2acefe6bfd101dcab003dc2f076886" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.108710 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" containerName="glance-httpd" containerID="cri-o://9185c2834b9cf63d0aa63913819769f2b534971b2a8528f9b981383d4142d637" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.162374 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b75a91-f656-4340-9b36-3b95732d5138" path="/var/lib/kubelet/pods/03b75a91-f656-4340-9b36-3b95732d5138/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.180927 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb0c901-c8bf-4767-ba12-56111931051e" path="/var/lib/kubelet/pods/1bb0c901-c8bf-4767-ba12-56111931051e/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.185367 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_654c613f-4f96-41f0-8937-d4be9f7897da/ovsdbserver-nb/0.log" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.185454 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.194064 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23acd1c1-f4b4-4d70-be4e-ea07cbff8053" path="/var/lib/kubelet/pods/23acd1c1-f4b4-4d70-be4e-ea07cbff8053/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.195992 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "1620c75e-1129-4850-9b27-7666e4cb8ed5" (UID: "1620c75e-1129-4850-9b27-7666e4cb8ed5"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.208210 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33fdfdaa-2411-42a9-8c71-6062c9cc143d" path="/var/lib/kubelet/pods/33fdfdaa-2411-42a9-8c71-6062c9cc143d/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.208882 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3428c4a3-12ce-4407-8c12-1fa0241c29a5" path="/var/lib/kubelet/pods/3428c4a3-12ce-4407-8c12-1fa0241c29a5/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.209485 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d29ab84-247c-4e1b-b199-3fa0bcf59771" path="/var/lib/kubelet/pods/3d29ab84-247c-4e1b-b199-3fa0bcf59771/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.217826 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1620c75e-1129-4850-9b27-7666e4cb8ed5-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.233863 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48551ac5-5853-40d9-843b-c14538e078d7" 
path="/var/lib/kubelet/pods/48551ac5-5853-40d9-843b-c14538e078d7/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.234617 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6640c65c-7090-4961-ba25-038487f6c62b" path="/var/lib/kubelet/pods/6640c65c-7090-4961-ba25-038487f6c62b/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.235220 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82441228-8114-485a-a020-b8997a64900c" path="/var/lib/kubelet/pods/82441228-8114-485a-a020-b8997a64900c/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.266114 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e3c0e55-238e-4e4b-b5fb-da86a9948f01" path="/var/lib/kubelet/pods/8e3c0e55-238e-4e4b-b5fb-da86a9948f01/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.267217 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="978a99d3-4e55-4026-a329-5da06bf36c90" path="/var/lib/kubelet/pods/978a99d3-4e55-4026-a329-5da06bf36c90/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.268447 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="992836a0-1e44-4e3f-8a2d-139f151eef51" path="/var/lib/kubelet/pods/992836a0-1e44-4e3f-8a2d-139f151eef51/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.297477 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fd08681-e332-4a24-9e90-d0085dc5e069" path="/var/lib/kubelet/pods/9fd08681-e332-4a24-9e90-d0085dc5e069/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.322841 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb7c774-4ae0-475c-a44a-138a917beac0" path="/var/lib/kubelet/pods/eeb7c774-4ae0-475c-a44a-138a917beac0/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.324047 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85694b8-9e77-48d3-9338-4b65bfe5d21f" 
path="/var/lib/kubelet/pods/f85694b8-9e77-48d3-9338-4b65bfe5d21f/volumes" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.349628 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.349679 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2p22z"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.350120 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-server" containerID="cri-o://15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357208 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-server" containerID="cri-o://6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357187 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-server" containerID="cri-o://8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357370 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-updater" containerID="cri-o://c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357422 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" 
podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-auditor" containerID="cri-o://5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357452 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-replicator" containerID="cri-o://2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357496 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-expirer" containerID="cri-o://b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357530 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="swift-recon-cron" containerID="cri-o://772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357562 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="rsync" containerID="cri-o://fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357597 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-auditor" containerID="cri-o://93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357628 4867 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-reaper" containerID="cri-o://e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357668 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-replicator" containerID="cri-o://86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357789 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-updater" containerID="cri-o://fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357870 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-auditor" containerID="cri-o://b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.357879 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-replicator" containerID="cri-o://5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.362946 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-config\") pod \"654c613f-4f96-41f0-8937-d4be9f7897da\" (UID: 
\"654c613f-4f96-41f0-8937-d4be9f7897da\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.363033 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdbserver-nb-tls-certs\") pod \"654c613f-4f96-41f0-8937-d4be9f7897da\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.363054 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-scripts\") pod \"654c613f-4f96-41f0-8937-d4be9f7897da\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.363076 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdb-rundir\") pod \"654c613f-4f96-41f0-8937-d4be9f7897da\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.363110 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-combined-ca-bundle\") pod \"654c613f-4f96-41f0-8937-d4be9f7897da\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.363211 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-metrics-certs-tls-certs\") pod \"654c613f-4f96-41f0-8937-d4be9f7897da\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.363244 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mfvjv\" (UniqueName: \"kubernetes.io/projected/654c613f-4f96-41f0-8937-d4be9f7897da-kube-api-access-mfvjv\") pod \"654c613f-4f96-41f0-8937-d4be9f7897da\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.363315 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"654c613f-4f96-41f0-8937-d4be9f7897da\" (UID: \"654c613f-4f96-41f0-8937-d4be9f7897da\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.363891 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "654c613f-4f96-41f0-8937-d4be9f7897da" (UID: "654c613f-4f96-41f0-8937-d4be9f7897da"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.364416 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-config" (OuterVolumeSpecName: "config") pod "654c613f-4f96-41f0-8937-d4be9f7897da" (UID: "654c613f-4f96-41f0-8937-d4be9f7897da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.365307 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-scripts" (OuterVolumeSpecName: "scripts") pod "654c613f-4f96-41f0-8937-d4be9f7897da" (UID: "654c613f-4f96-41f0-8937-d4be9f7897da"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.367531 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2p22z"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.386246 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7bpzf"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.395624 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7bpzf"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.402674 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-j5kc9"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.416236 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-j5kc9"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.419935 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xsk47"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.428936 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xsk47"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.436210 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-69c1-account-create-update-xbwpk"] Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.436689 4867 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 01 08:50:27 crc kubenswrapper[4867]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 01 08:50:27 crc kubenswrapper[4867]: + source /usr/local/bin/container-scripts/functions Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNBridge=br-int Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNRemote=tcp:localhost:6642 Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNEncapType=geneve Jan 01 08:50:27 crc 
kubenswrapper[4867]: ++ OVNAvailabilityZones= Jan 01 08:50:27 crc kubenswrapper[4867]: ++ EnableChassisAsGateway=true Jan 01 08:50:27 crc kubenswrapper[4867]: ++ PhysicalNetworks= Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNHostName= Jan 01 08:50:27 crc kubenswrapper[4867]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 01 08:50:27 crc kubenswrapper[4867]: ++ ovs_dir=/var/lib/openvswitch Jan 01 08:50:27 crc kubenswrapper[4867]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 01 08:50:27 crc kubenswrapper[4867]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 01 08:50:27 crc kubenswrapper[4867]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 01 08:50:27 crc kubenswrapper[4867]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 01 08:50:27 crc kubenswrapper[4867]: + sleep 0.5 Jan 01 08:50:27 crc kubenswrapper[4867]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 01 08:50:27 crc kubenswrapper[4867]: + cleanup_ovsdb_server_semaphore Jan 01 08:50:27 crc kubenswrapper[4867]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 01 08:50:27 crc kubenswrapper[4867]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 01 08:50:27 crc kubenswrapper[4867]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-smgl6" message=< Jan 01 08:50:27 crc kubenswrapper[4867]: Exiting ovsdb-server (5) ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 01 08:50:27 crc kubenswrapper[4867]: + source /usr/local/bin/container-scripts/functions Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNBridge=br-int Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNRemote=tcp:localhost:6642 Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNEncapType=geneve Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNAvailabilityZones= Jan 01 08:50:27 crc kubenswrapper[4867]: ++ 
EnableChassisAsGateway=true Jan 01 08:50:27 crc kubenswrapper[4867]: ++ PhysicalNetworks= Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNHostName= Jan 01 08:50:27 crc kubenswrapper[4867]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 01 08:50:27 crc kubenswrapper[4867]: ++ ovs_dir=/var/lib/openvswitch Jan 01 08:50:27 crc kubenswrapper[4867]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 01 08:50:27 crc kubenswrapper[4867]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 01 08:50:27 crc kubenswrapper[4867]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 01 08:50:27 crc kubenswrapper[4867]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 01 08:50:27 crc kubenswrapper[4867]: + sleep 0.5 Jan 01 08:50:27 crc kubenswrapper[4867]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 01 08:50:27 crc kubenswrapper[4867]: + cleanup_ovsdb_server_semaphore Jan 01 08:50:27 crc kubenswrapper[4867]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 01 08:50:27 crc kubenswrapper[4867]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 01 08:50:27 crc kubenswrapper[4867]: > Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.436732 4867 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 01 08:50:27 crc kubenswrapper[4867]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 01 08:50:27 crc kubenswrapper[4867]: + source /usr/local/bin/container-scripts/functions Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNBridge=br-int Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNRemote=tcp:localhost:6642 Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNEncapType=geneve Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNAvailabilityZones= Jan 01 08:50:27 crc kubenswrapper[4867]: ++ EnableChassisAsGateway=true Jan 01 08:50:27 crc kubenswrapper[4867]: ++ 
PhysicalNetworks= Jan 01 08:50:27 crc kubenswrapper[4867]: ++ OVNHostName= Jan 01 08:50:27 crc kubenswrapper[4867]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 01 08:50:27 crc kubenswrapper[4867]: ++ ovs_dir=/var/lib/openvswitch Jan 01 08:50:27 crc kubenswrapper[4867]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 01 08:50:27 crc kubenswrapper[4867]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 01 08:50:27 crc kubenswrapper[4867]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 01 08:50:27 crc kubenswrapper[4867]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 01 08:50:27 crc kubenswrapper[4867]: + sleep 0.5 Jan 01 08:50:27 crc kubenswrapper[4867]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 01 08:50:27 crc kubenswrapper[4867]: + cleanup_ovsdb_server_semaphore Jan 01 08:50:27 crc kubenswrapper[4867]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 01 08:50:27 crc kubenswrapper[4867]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 01 08:50:27 crc kubenswrapper[4867]: > pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server" containerID="cri-o://823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.436766 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server" containerID="cri-o://823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.444324 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9ba1-account-create-update-cvb54"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.450861 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/root-account-create-update-7n8cs"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.463598 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-b4jlt"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.463655 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-b4jlt"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.465209 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.465232 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/654c613f-4f96-41f0-8937-d4be9f7897da-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.465242 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.470040 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.470324 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-log" containerID="cri-o://d63d8143a81a71b834c747786bff3a5cc7e1868dc867d35f729e24492192e1ec" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.470779 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-metadata" containerID="cri-o://3cd677561763860feb64840f8907414cdd4cd64aae8107b33f87bbbd3b84da9d" 
gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.479527 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "654c613f-4f96-41f0-8937-d4be9f7897da" (UID: "654c613f-4f96-41f0-8937-d4be9f7897da"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.479950 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.489851 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654c613f-4f96-41f0-8937-d4be9f7897da-kube-api-access-mfvjv" (OuterVolumeSpecName: "kube-api-access-mfvjv") pod "654c613f-4f96-41f0-8937-d4be9f7897da" (UID: "654c613f-4f96-41f0-8937-d4be9f7897da"). InnerVolumeSpecName "kube-api-access-mfvjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.499275 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "654c613f-4f96-41f0-8937-d4be9f7897da" (UID: "654c613f-4f96-41f0-8937-d4be9f7897da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.504510 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f3c1-account-create-update-lpffc"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.522975 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-tbs6w"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.532546 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-tbs6w"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.542818 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9ba1-account-create-update-cvb54"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.548852 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-tf28b"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.554407 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-tf28b"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.557556 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4gj2t_63c4f874-d21a-42b7-884a-f070d8dc2150/openstack-network-exporter/0.log" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.557626 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.560638 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.560969 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerName="nova-api-log" containerID="cri-o://b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.561115 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerName="nova-api-api" containerID="cri-o://c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.566378 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.571848 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-86c7f77bc7-nt6jq"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.572166 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" podUID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerName="proxy-httpd" containerID="cri-o://4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.572319 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" podUID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerName="proxy-server" containerID="cri-o://e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 
08:50:27.588816 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.588908 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfvjv\" (UniqueName: \"kubernetes.io/projected/654c613f-4f96-41f0-8937-d4be9f7897da-kube-api-access-mfvjv\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.588938 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.589182 4867 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.589341 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data podName:84d7aac6-1073-41c0-acff-169e36ec197d nodeName:}" failed. No retries permitted until 2026-01-01 08:50:29.589319825 +0000 UTC m=+1438.724588594 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data") pod "rabbitmq-server-0" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d") : configmap "rabbitmq-config-data" not found Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.623758 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6596d5f4d6-9cxqr"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.624015 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" podUID="c6e96caa-b906-4b24-af21-8068ea727bba" containerName="barbican-keystone-listener-log" containerID="cri-o://dc673bc1feba5e02af532b24171ae7075ed044000fad91c5933e93e216ca2214" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.624083 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" podUID="c6e96caa-b906-4b24-af21-8068ea727bba" containerName="barbican-keystone-listener" containerID="cri-o://12ac59ef1025a56a54145198bfc20879e2c8969f62ef2c28de3bb86b0129fd27" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.640422 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.641221 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7965d77d77-cwbt7"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.641560 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7965d77d77-cwbt7" podUID="22fe2632-f8f6-4ef9-9f4c-72b69bd45932" containerName="barbican-worker-log" containerID="cri-o://65ef15ad242719f3da63fa724d97de1fb1223fd81f2c48a72e0cb2f1c91f8f4b" gracePeriod=30 Jan 01 
08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.641669 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7965d77d77-cwbt7" podUID="22fe2632-f8f6-4ef9-9f4c-72b69bd45932" containerName="barbican-worker" containerID="cri-o://f95ad7dcbf76b229ef0f72ae0e667de7d0e25a5f3d7e84f84fd18139ab18e305" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.656892 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58dc5bfddd-522rc"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.657232 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58dc5bfddd-522rc" podUID="3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" containerName="barbican-api-log" containerID="cri-o://937838972a1573c1df4db392f223b3e988bccb1ee572a68dba6f4b3aed9b91ee" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.657341 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58dc5bfddd-522rc" podUID="3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" containerName="barbican-api" containerID="cri-o://dee68f8d073a368d94e9708c1869989ddd8ada0a6eb993b2a239618bdb95a0c6" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.668554 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.671550 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovs-vswitchd" containerID="cri-o://c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" gracePeriod=29 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.689814 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grtm8\" (UniqueName: 
\"kubernetes.io/projected/63c4f874-d21a-42b7-884a-f070d8dc2150-kube-api-access-grtm8\") pod \"63c4f874-d21a-42b7-884a-f070d8dc2150\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.689981 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c4f874-d21a-42b7-884a-f070d8dc2150-config\") pod \"63c4f874-d21a-42b7-884a-f070d8dc2150\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.690013 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovs-rundir\") pod \"63c4f874-d21a-42b7-884a-f070d8dc2150\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.690071 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovn-rundir\") pod \"63c4f874-d21a-42b7-884a-f070d8dc2150\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.690118 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-metrics-certs-tls-certs\") pod \"63c4f874-d21a-42b7-884a-f070d8dc2150\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.690187 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-combined-ca-bundle\") pod \"63c4f874-d21a-42b7-884a-f070d8dc2150\" (UID: \"63c4f874-d21a-42b7-884a-f070d8dc2150\") " Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 
08:50:27.691163 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.693136 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.695109 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="9c8a7ced-4990-4ea2-baff-8d3adf064a56" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.695487 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "63c4f874-d21a-42b7-884a-f070d8dc2150" (UID: "63c4f874-d21a-42b7-884a-f070d8dc2150"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.699524 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "63c4f874-d21a-42b7-884a-f070d8dc2150" (UID: "63c4f874-d21a-42b7-884a-f070d8dc2150"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.701605 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c4f874-d21a-42b7-884a-f070d8dc2150-config" (OuterVolumeSpecName: "config") pod "63c4f874-d21a-42b7-884a-f070d8dc2150" (UID: "63c4f874-d21a-42b7-884a-f070d8dc2150"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.714695 4867 generic.go:334] "Generic (PLEG): container finished" podID="6f47f095-abde-4e07-8edf-d0a318043581" containerID="c1af335d05f310408a3a3e7e9c132db515267848f5873efa7c468ee6eea3edc6" exitCode=143 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.714778 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6f47f095-abde-4e07-8edf-d0a318043581","Type":"ContainerDied","Data":"c1af335d05f310408a3a3e7e9c132db515267848f5873efa7c468ee6eea3edc6"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.736632 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.736842 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9" containerName="nova-scheduler-scheduler" containerID="cri-o://3d733e18f1ee0ab5fdfc275f4b701971bfd4e30736094221d5f2e06640b3bfa5" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.747441 4867 generic.go:334] "Generic (PLEG): container finished" podID="d579322c-12b7-488b-8220-31ef35016c68" containerID="9aaec2bdb437295ba5550a821ae8d1f9f3e0bbb9ba3c726894b4022fa400f982" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.747495 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" event={"ID":"d579322c-12b7-488b-8220-31ef35016c68","Type":"ContainerDied","Data":"9aaec2bdb437295ba5550a821ae8d1f9f3e0bbb9ba3c726894b4022fa400f982"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.754960 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.755209 4867 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-cell0-conductor-0" podUID="7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb" containerName="nova-cell0-conductor-conductor" containerID="cri-o://e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.765150 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c4f874-d21a-42b7-884a-f070d8dc2150-kube-api-access-grtm8" (OuterVolumeSpecName: "kube-api-access-grtm8") pod "63c4f874-d21a-42b7-884a-f070d8dc2150" (UID: "63c4f874-d21a-42b7-884a-f070d8dc2150"). InnerVolumeSpecName "kube-api-access-grtm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.782435 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfnlz"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.792034 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_654c613f-4f96-41f0-8937-d4be9f7897da/ovsdbserver-nb/0.log" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.792124 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"654c613f-4f96-41f0-8937-d4be9f7897da","Type":"ContainerDied","Data":"a15319d8cd56cd781759c4e46ec289c46ceed8c5ee394edf13b3b22673a258c4"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.792161 4867 scope.go:117] "RemoveContainer" containerID="af04740eea97da4b3747aedaa2d322eabd244cf11d0911b2ba02cff1211719ab" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.792307 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.793857 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grtm8\" (UniqueName: \"kubernetes.io/projected/63c4f874-d21a-42b7-884a-f070d8dc2150-kube-api-access-grtm8\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.793870 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c4f874-d21a-42b7-884a-f070d8dc2150-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.793879 4867 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.793893 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63c4f874-d21a-42b7-884a-f070d8dc2150-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.802892 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfnlz"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.809237 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fbpqm"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.812454 4867 generic.go:334] "Generic (PLEG): container finished" podID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" containerID="faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489" exitCode=143 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.812514 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"ff82f43d-33bd-47f0-9864-83bb3048f9b2","Type":"ContainerDied","Data":"faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.813780 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" containerName="rabbitmq" containerID="cri-o://bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55" gracePeriod=604800 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.814998 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "654c613f-4f96-41f0-8937-d4be9f7897da" (UID: "654c613f-4f96-41f0-8937-d4be9f7897da"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.815305 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4gj2t_63c4f874-d21a-42b7-884a-f070d8dc2150/openstack-network-exporter/0.log" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.815350 4867 generic.go:334] "Generic (PLEG): container finished" podID="63c4f874-d21a-42b7-884a-f070d8dc2150" containerID="880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541" exitCode=2 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.815435 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-4gj2t" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.816183 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4gj2t" event={"ID":"63c4f874-d21a-42b7-884a-f070d8dc2150","Type":"ContainerDied","Data":"880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.816214 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4gj2t" event={"ID":"63c4f874-d21a-42b7-884a-f070d8dc2150","Type":"ContainerDied","Data":"6c446e1789571c09aba9cc08cb6a2a94dffcf35f4eb48e907e19acde767a3fa1"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.816227 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fbpqm"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.822045 4867 generic.go:334] "Generic (PLEG): container finished" podID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" containerID="c147d7635f762a0bb4d5c3b4b921b293ef2acefe6bfd101dcab003dc2f076886" exitCode=143 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.822102 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e809a11a-a5d8-49a0-9d9d-cac6a399dd35","Type":"ContainerDied","Data":"c147d7635f762a0bb4d5c3b4b921b293ef2acefe6bfd101dcab003dc2f076886"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.823235 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7n8cs" event={"ID":"19551dba-c741-42e0-b228-6cad78717264","Type":"ContainerStarted","Data":"76dd789da56b50cb9d816998bd6433f9f32b14d954aa19464cfe9c312057f888"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.833839 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.834153 4867 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="8799ae41-c9cb-409a-ac59-3e6b59bb0198" containerName="nova-cell1-conductor-conductor" containerID="cri-o://f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd" gracePeriod=30 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.859374 4867 generic.go:334] "Generic (PLEG): container finished" podID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.859459 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-smgl6" event={"ID":"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e","Type":"ContainerDied","Data":"823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.904603 4867 generic.go:334] "Generic (PLEG): container finished" podID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" containerID="bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.904720 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fb785fd89-9d8g9" event={"ID":"0973b1fb-6399-4d31-aa7e-2a41a163e4f4","Type":"ContainerDied","Data":"bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.906166 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.906650 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" 
event={"ID":"6c6fd580-15f1-4929-b211-ecb1dc767e7c","Type":"ContainerStarted","Data":"8074aa337305fadd047e17c8b587cc37fb27e4963a3bb6030312dfb8ebf64947"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928206 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928247 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928256 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928268 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928277 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928284 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928291 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 
08:50:27.928299 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928309 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928316 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142" exitCode=0 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928414 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928446 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928459 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928470 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928481 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928501 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928513 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928525 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928536 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.928635 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="84d7aac6-1073-41c0-acff-169e36ec197d" containerName="rabbitmq" 
containerID="cri-o://8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b" gracePeriod=604800 Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.941643 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 01 08:50:27 crc kubenswrapper[4867]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: if [ -n "nova_cell0" ]; then Jan 01 08:50:27 crc kubenswrapper[4867]: GRANT_DATABASE="nova_cell0" Jan 01 08:50:27 crc kubenswrapper[4867]: else Jan 01 08:50:27 crc kubenswrapper[4867]: GRANT_DATABASE="*" Jan 01 08:50:27 crc kubenswrapper[4867]: fi Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: # going for maximum compatibility here: Jan 01 08:50:27 crc kubenswrapper[4867]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 01 08:50:27 crc kubenswrapper[4867]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 01 08:50:27 crc kubenswrapper[4867]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 01 08:50:27 crc kubenswrapper[4867]: # support updates Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: $MYSQL_CMD < logger="UnhandledError" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.942143 4867 generic.go:334] "Generic (PLEG): container finished" podID="1822baf8-11aa-4152-a74f-2ce0383c1094" containerID="455b0cde75a033b7a0c94fdc6b6b1dd1216e9777beb9c14b66a6998f6b2fa1d5" exitCode=143 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.942242 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-67dd85d5b6-ww7ll" event={"ID":"1822baf8-11aa-4152-a74f-2ce0383c1094","Type":"ContainerDied","Data":"455b0cde75a033b7a0c94fdc6b6b1dd1216e9777beb9c14b66a6998f6b2fa1d5"} Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.943868 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" podUID="6c6fd580-15f1-4929-b211-ecb1dc767e7c" Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.969803 4867 generic.go:334] "Generic (PLEG): container finished" podID="bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" containerID="e698afe95247188c1349dc45263790341fa72c55e4b91893ad3b356253a8a571" exitCode=137 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.975446 4867 generic.go:334] "Generic (PLEG): container finished" podID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerID="d63d8143a81a71b834c747786bff3a5cc7e1868dc867d35f729e24492192e1ec" exitCode=143 Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.975917 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e7003b80-53fa-4550-8f18-486a0f7988c9","Type":"ContainerDied","Data":"d63d8143a81a71b834c747786bff3a5cc7e1868dc867d35f729e24492192e1ec"} Jan 01 08:50:27 crc kubenswrapper[4867]: I0101 08:50:27.981910 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "654c613f-4f96-41f0-8937-d4be9f7897da" (UID: "654c613f-4f96-41f0-8937-d4be9f7897da"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.991605 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 01 08:50:27 crc kubenswrapper[4867]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: if [ -n "glance" ]; then Jan 01 08:50:27 crc kubenswrapper[4867]: GRANT_DATABASE="glance" Jan 01 08:50:27 crc kubenswrapper[4867]: else Jan 01 08:50:27 crc kubenswrapper[4867]: GRANT_DATABASE="*" Jan 01 08:50:27 crc kubenswrapper[4867]: fi Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: # going for maximum compatibility here: Jan 01 08:50:27 crc kubenswrapper[4867]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Jan 01 08:50:27 crc kubenswrapper[4867]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 01 08:50:27 crc kubenswrapper[4867]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 01 08:50:27 crc kubenswrapper[4867]: # support updates Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: $MYSQL_CMD < logger="UnhandledError" Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.992182 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 01 08:50:27 crc kubenswrapper[4867]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: if [ -n "nova_api" ]; then Jan 01 08:50:27 crc kubenswrapper[4867]: GRANT_DATABASE="nova_api" Jan 01 08:50:27 crc kubenswrapper[4867]: else Jan 01 08:50:27 crc kubenswrapper[4867]: GRANT_DATABASE="*" Jan 01 08:50:27 crc kubenswrapper[4867]: fi Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: # going for maximum compatibility here: Jan 01 08:50:27 crc kubenswrapper[4867]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 01 08:50:27 crc kubenswrapper[4867]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 01 08:50:27 crc kubenswrapper[4867]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 01 08:50:27 crc kubenswrapper[4867]: # support updates Jan 01 08:50:27 crc kubenswrapper[4867]: Jan 01 08:50:27 crc kubenswrapper[4867]: $MYSQL_CMD < logger="UnhandledError" Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.992723 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-69c1-account-create-update-xbwpk" podUID="6fe85b54-84b3-46ab-94b7-597ffd52f997" Jan 01 08:50:27 crc kubenswrapper[4867]: E0101 08:50:27.995404 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-f3c1-account-create-update-lpffc" podUID="22daf9e9-6114-4fc4-951e-da0e7b92c4b8" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.004094 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.009385 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-swift-storage-0\") pod \"d579322c-12b7-488b-8220-31ef35016c68\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.009418 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-svc\") pod \"d579322c-12b7-488b-8220-31ef35016c68\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.009675 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-config\") pod \"d579322c-12b7-488b-8220-31ef35016c68\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.009777 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-nb\") pod \"d579322c-12b7-488b-8220-31ef35016c68\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.009812 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mztd6\" (UniqueName: \"kubernetes.io/projected/d579322c-12b7-488b-8220-31ef35016c68-kube-api-access-mztd6\") pod \"d579322c-12b7-488b-8220-31ef35016c68\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.009840 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-sb\") pod \"d579322c-12b7-488b-8220-31ef35016c68\" (UID: \"d579322c-12b7-488b-8220-31ef35016c68\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.010441 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/654c613f-4f96-41f0-8937-d4be9f7897da-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.088705 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d579322c-12b7-488b-8220-31ef35016c68-kube-api-access-mztd6" (OuterVolumeSpecName: "kube-api-access-mztd6") pod "d579322c-12b7-488b-8220-31ef35016c68" (UID: "d579322c-12b7-488b-8220-31ef35016c68"). InnerVolumeSpecName "kube-api-access-mztd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: E0101 08:50:28.102263 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.111453 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mztd6\" (UniqueName: \"kubernetes.io/projected/d579322c-12b7-488b-8220-31ef35016c68-kube-api-access-mztd6\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: E0101 08:50:28.111512 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 01 08:50:28 crc kubenswrapper[4867]: E0101 08:50:28.112717 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 01 08:50:28 crc kubenswrapper[4867]: E0101 08:50:28.112748 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb" containerName="nova-cell0-conductor-conductor" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.113202 4867 scope.go:117] "RemoveContainer" containerID="799a9220b793a8689047c599a1077afcad897844454e5de218f99838ce959d39" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.117550 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63c4f874-d21a-42b7-884a-f070d8dc2150" (UID: "63c4f874-d21a-42b7-884a-f070d8dc2150"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.167551 4867 scope.go:117] "RemoveContainer" containerID="880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.189280 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d579322c-12b7-488b-8220-31ef35016c68" (UID: "d579322c-12b7-488b-8220-31ef35016c68"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.193273 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="d2662702-83ed-4457-a630-e8a6d07ffb8b" containerName="galera" containerID="cri-o://efcd353d29f3de492430dcf05725698c36d4fc1c75947e3d7d13befdcc5b7a27" gracePeriod=30 Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.193518 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.196367 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-config" (OuterVolumeSpecName: "config") pod "d579322c-12b7-488b-8220-31ef35016c68" (UID: "d579322c-12b7-488b-8220-31ef35016c68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.196461 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d579322c-12b7-488b-8220-31ef35016c68" (UID: "d579322c-12b7-488b-8220-31ef35016c68"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.201361 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d579322c-12b7-488b-8220-31ef35016c68" (UID: "d579322c-12b7-488b-8220-31ef35016c68"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.207282 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.213128 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config\") pod \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.213186 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-combined-ca-bundle\") pod \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.213275 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzwkb\" (UniqueName: \"kubernetes.io/projected/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-kube-api-access-qzwkb\") pod \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.213313 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config-secret\") pod \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\" (UID: \"bf6c2c64-e624-4388-b9dc-3d8c7985ac8f\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.213620 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.213638 
4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.213647 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.213656 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.213665 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.213734 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.217834 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d579322c-12b7-488b-8220-31ef35016c68" (UID: "d579322c-12b7-488b-8220-31ef35016c68"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.220303 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "63c4f874-d21a-42b7-884a-f070d8dc2150" (UID: "63c4f874-d21a-42b7-884a-f070d8dc2150"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.227563 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-kube-api-access-qzwkb" (OuterVolumeSpecName: "kube-api-access-qzwkb") pod "bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" (UID: "bf6c2c64-e624-4388-b9dc-3d8c7985ac8f"). InnerVolumeSpecName "kube-api-access-qzwkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.237147 4867 scope.go:117] "RemoveContainer" containerID="880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541" Jan 01 08:50:28 crc kubenswrapper[4867]: E0101 08:50:28.238455 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541\": container with ID starting with 880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541 not found: ID does not exist" containerID="880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.238507 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541"} err="failed to get container status \"880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541\": rpc error: code = NotFound desc = could not find container \"880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541\": container with ID starting with 880fc92f7ecbd0a9c36266766043de57fa6004b144c4752222808fdfded81541 not found: ID does not exist" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.249316 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" (UID: "bf6c2c64-e624-4388-b9dc-3d8c7985ac8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.266415 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" (UID: "bf6c2c64-e624-4388-b9dc-3d8c7985ac8f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.315478 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63c4f874-d21a-42b7-884a-f070d8dc2150-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.315509 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.315518 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.315527 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d579322c-12b7-488b-8220-31ef35016c68-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.315538 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzwkb\" (UniqueName: \"kubernetes.io/projected/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-kube-api-access-qzwkb\") 
on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.324149 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" (UID: "bf6c2c64-e624-4388-b9dc-3d8c7985ac8f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.352095 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.417485 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.481631 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="84d7aac6-1073-41c0-acff-169e36ec197d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.537074 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-4gj2t"] Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.546236 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-4gj2t"] Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.740964 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.742352 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.828716 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-vencrypt-tls-certs\") pod \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.828805 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kcj4\" (UniqueName: \"kubernetes.io/projected/9c8a7ced-4990-4ea2-baff-8d3adf064a56-kube-api-access-9kcj4\") pod \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.828832 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-nova-novncproxy-tls-certs\") pod \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.828859 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-combined-ca-bundle\") pod \"28bc6ac4-481b-4809-a61e-f32ff6a17920\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.828918 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-public-tls-certs\") pod 
\"28bc6ac4-481b-4809-a61e-f32ff6a17920\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.829024 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-config-data\") pod \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.829083 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwfhn\" (UniqueName: \"kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-kube-api-access-lwfhn\") pod \"28bc6ac4-481b-4809-a61e-f32ff6a17920\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.829101 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-log-httpd\") pod \"28bc6ac4-481b-4809-a61e-f32ff6a17920\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.829455 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-config-data\") pod \"28bc6ac4-481b-4809-a61e-f32ff6a17920\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.829476 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-etc-swift\") pod \"28bc6ac4-481b-4809-a61e-f32ff6a17920\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.829513 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-internal-tls-certs\") pod \"28bc6ac4-481b-4809-a61e-f32ff6a17920\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.829528 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-combined-ca-bundle\") pod \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\" (UID: \"9c8a7ced-4990-4ea2-baff-8d3adf064a56\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.829549 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-run-httpd\") pod \"28bc6ac4-481b-4809-a61e-f32ff6a17920\" (UID: \"28bc6ac4-481b-4809-a61e-f32ff6a17920\") " Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.830198 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28bc6ac4-481b-4809-a61e-f32ff6a17920" (UID: "28bc6ac4-481b-4809-a61e-f32ff6a17920"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.837353 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-kube-api-access-lwfhn" (OuterVolumeSpecName: "kube-api-access-lwfhn") pod "28bc6ac4-481b-4809-a61e-f32ff6a17920" (UID: "28bc6ac4-481b-4809-a61e-f32ff6a17920"). InnerVolumeSpecName "kube-api-access-lwfhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.837821 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "28bc6ac4-481b-4809-a61e-f32ff6a17920" (UID: "28bc6ac4-481b-4809-a61e-f32ff6a17920"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.838133 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28bc6ac4-481b-4809-a61e-f32ff6a17920" (UID: "28bc6ac4-481b-4809-a61e-f32ff6a17920"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.842516 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8a7ced-4990-4ea2-baff-8d3adf064a56-kube-api-access-9kcj4" (OuterVolumeSpecName: "kube-api-access-9kcj4") pod "9c8a7ced-4990-4ea2-baff-8d3adf064a56" (UID: "9c8a7ced-4990-4ea2-baff-8d3adf064a56"). InnerVolumeSpecName "kube-api-access-9kcj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.879148 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c8a7ced-4990-4ea2-baff-8d3adf064a56" (UID: "9c8a7ced-4990-4ea2-baff-8d3adf064a56"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.879267 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-config-data" (OuterVolumeSpecName: "config-data") pod "9c8a7ced-4990-4ea2-baff-8d3adf064a56" (UID: "9c8a7ced-4990-4ea2-baff-8d3adf064a56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.893423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "9c8a7ced-4990-4ea2-baff-8d3adf064a56" (UID: "9c8a7ced-4990-4ea2-baff-8d3adf064a56"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.921744 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "9c8a7ced-4990-4ea2-baff-8d3adf064a56" (UID: "9c8a7ced-4990-4ea2-baff-8d3adf064a56"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.921772 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28bc6ac4-481b-4809-a61e-f32ff6a17920" (UID: "28bc6ac4-481b-4809-a61e-f32ff6a17920"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.931942 4867 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.931970 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kcj4\" (UniqueName: \"kubernetes.io/projected/9c8a7ced-4990-4ea2-baff-8d3adf064a56-kube-api-access-9kcj4\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.931981 4867 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.931990 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.931999 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.932007 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwfhn\" (UniqueName: \"kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-kube-api-access-lwfhn\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.932015 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc 
kubenswrapper[4867]: I0101 08:50:28.932023 4867 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28bc6ac4-481b-4809-a61e-f32ff6a17920-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.932033 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8a7ced-4990-4ea2-baff-8d3adf064a56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.932040 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28bc6ac4-481b-4809-a61e-f32ff6a17920-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.959560 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-config-data" (OuterVolumeSpecName: "config-data") pod "28bc6ac4-481b-4809-a61e-f32ff6a17920" (UID: "28bc6ac4-481b-4809-a61e-f32ff6a17920"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.964024 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "28bc6ac4-481b-4809-a61e-f32ff6a17920" (UID: "28bc6ac4-481b-4809-a61e-f32ff6a17920"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:28 crc kubenswrapper[4867]: I0101 08:50:28.964772 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "28bc6ac4-481b-4809-a61e-f32ff6a17920" (UID: "28bc6ac4-481b-4809-a61e-f32ff6a17920"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.034205 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.034232 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.034241 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28bc6ac4-481b-4809-a61e-f32ff6a17920-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.034304 4867 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.034349 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data podName:1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99 nodeName:}" failed. No retries permitted until 2026-01-01 08:50:33.034335041 +0000 UTC m=+1442.169603810 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data") pod "rabbitmq-cell1-server-0" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99") : configmap "rabbitmq-cell1-config-data" not found Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.040109 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70" exitCode=0 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.040148 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615" exitCode=0 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.040157 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787" exitCode=0 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.040163 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08" exitCode=0 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.040218 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.040242 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.040251 4867 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.040260 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.053537 4867 generic.go:334] "Generic (PLEG): container finished" podID="19551dba-c741-42e0-b228-6cad78717264" containerID="ec9643ea8268018a8e1ee9aaddca4cf958028c7a397a85ccd620195862527c6b" exitCode=1 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.053612 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7n8cs" event={"ID":"19551dba-c741-42e0-b228-6cad78717264","Type":"ContainerDied","Data":"ec9643ea8268018a8e1ee9aaddca4cf958028c7a397a85ccd620195862527c6b"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.054167 4867 scope.go:117] "RemoveContainer" containerID="ec9643ea8268018a8e1ee9aaddca4cf958028c7a397a85ccd620195862527c6b" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.074328 4867 generic.go:334] "Generic (PLEG): container finished" podID="3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" containerID="937838972a1573c1df4db392f223b3e988bccb1ee572a68dba6f4b3aed9b91ee" exitCode=143 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.074434 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dc5bfddd-522rc" event={"ID":"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20","Type":"ContainerDied","Data":"937838972a1573c1df4db392f223b3e988bccb1ee572a68dba6f4b3aed9b91ee"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.077132 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:50:29 crc 
kubenswrapper[4867]: I0101 08:50:29.077465 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="ceilometer-central-agent" containerID="cri-o://b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049" gracePeriod=30 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.077848 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="sg-core" containerID="cri-o://46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02" gracePeriod=30 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.077864 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="proxy-httpd" containerID="cri-o://9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2" gracePeriod=30 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.077927 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="ceilometer-notification-agent" containerID="cri-o://414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e" gracePeriod=30 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.093182 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.095962 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbdfbb78f-5g78q" event={"ID":"d579322c-12b7-488b-8220-31ef35016c68","Type":"ContainerDied","Data":"bb6bf94584f988aec98816b2c74dc43a9b9138402681d5cf729debf913d051a3"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.096022 4867 scope.go:117] "RemoveContainer" containerID="9aaec2bdb437295ba5550a821ae8d1f9f3e0bbb9ba3c726894b4022fa400f982" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.115472 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.115719 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="15b1cd3d-248e-4861-a69a-4c8d284babb3" containerName="kube-state-metrics" containerID="cri-o://241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790" gracePeriod=30 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.122067 4867 generic.go:334] "Generic (PLEG): container finished" podID="22fe2632-f8f6-4ef9-9f4c-72b69bd45932" containerID="65ef15ad242719f3da63fa724d97de1fb1223fd81f2c48a72e0cb2f1c91f8f4b" exitCode=143 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.122147 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7965d77d77-cwbt7" event={"ID":"22fe2632-f8f6-4ef9-9f4c-72b69bd45932","Type":"ContainerDied","Data":"65ef15ad242719f3da63fa724d97de1fb1223fd81f2c48a72e0cb2f1c91f8f4b"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.158012 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="606a07c4-3bbe-4968-a035-6d41b2cf3803" path="/var/lib/kubelet/pods/606a07c4-3bbe-4968-a035-6d41b2cf3803/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.170637 4867 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="63c4f874-d21a-42b7-884a-f070d8dc2150" path="/var/lib/kubelet/pods/63c4f874-d21a-42b7-884a-f070d8dc2150/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.171380 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654c613f-4f96-41f0-8937-d4be9f7897da" path="/var/lib/kubelet/pods/654c613f-4f96-41f0-8937-d4be9f7897da/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.172926 4867 generic.go:334] "Generic (PLEG): container finished" podID="d2662702-83ed-4457-a630-e8a6d07ffb8b" containerID="efcd353d29f3de492430dcf05725698c36d4fc1c75947e3d7d13befdcc5b7a27" exitCode=0 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.185076 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68297d63-ca47-4d11-8e40-3d6903527773" path="/var/lib/kubelet/pods/68297d63-ca47-4d11-8e40-3d6903527773/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.185893 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dde095a-4ecb-477d-9699-9867084e2d00" path="/var/lib/kubelet/pods/6dde095a-4ecb-477d-9699-9867084e2d00/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.186402 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9582f632-f340-4a0f-b436-c959a32e797e" path="/var/lib/kubelet/pods/9582f632-f340-4a0f-b436-c959a32e797e/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.186878 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9738d77e-f75a-4b30-ac35-4e91438aad75" path="/var/lib/kubelet/pods/9738d77e-f75a-4b30-ac35-4e91438aad75/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.195362 4867 generic.go:334] "Generic (PLEG): container finished" podID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerID="e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac" exitCode=0 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.195397 4867 generic.go:334] 
"Generic (PLEG): container finished" podID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerID="4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398" exitCode=0 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.195490 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.216737 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed2e086-295b-468f-93a1-47ce57c3e871" path="/var/lib/kubelet/pods/aed2e086-295b-468f-93a1-47ce57c3e871/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.217296 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ae1a6d-6f42-40a0-87a2-488ee05a0c09" path="/var/lib/kubelet/pods/b3ae1a6d-6f42-40a0-87a2-488ee05a0c09/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.217804 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf6c2c64-e624-4388-b9dc-3d8c7985ac8f" path="/var/lib/kubelet/pods/bf6c2c64-e624-4388-b9dc-3d8c7985ac8f/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.218845 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ec6291-8802-442c-af30-08b607472e97" path="/var/lib/kubelet/pods/c8ec6291-8802-442c-af30-08b607472e97/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.219345 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de595f01-a50d-44f7-a2da-6dbb32c429ec" path="/var/lib/kubelet/pods/de595f01-a50d-44f7-a2da-6dbb32c429ec/volumes" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.219381 4867 scope.go:117] "RemoveContainer" containerID="37310f221b90260f704cc9774670b03490fde04e0d9fb5eac5f18180fd865c4a" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.220307 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"d2662702-83ed-4457-a630-e8a6d07ffb8b","Type":"ContainerDied","Data":"efcd353d29f3de492430dcf05725698c36d4fc1c75947e3d7d13befdcc5b7a27"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.220336 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.220353 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" event={"ID":"28bc6ac4-481b-4809-a61e-f32ff6a17920","Type":"ContainerDied","Data":"e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.220367 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" event={"ID":"28bc6ac4-481b-4809-a61e-f32ff6a17920","Type":"ContainerDied","Data":"4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.220376 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" event={"ID":"28bc6ac4-481b-4809-a61e-f32ff6a17920","Type":"ContainerDied","Data":"5c05402f0b7a1ed492e73b2aaadc51d362d74f80468ba808d9c123b4db15c8b6"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.220527 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="b43ddff2-67cd-4ab7-84c1-763dd002457c" containerName="memcached" containerID="cri-o://4e331c080ef51c9e8e140526532ca6a567a4007701b3c6a5707e70e828973809" gracePeriod=30 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.235969 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbdfbb78f-5g78q"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.268168 4867 generic.go:334] "Generic (PLEG): container finished" podID="9c8a7ced-4990-4ea2-baff-8d3adf064a56" containerID="c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd" exitCode=0 Jan 01 
08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.268250 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9c8a7ced-4990-4ea2-baff-8d3adf064a56","Type":"ContainerDied","Data":"c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.268276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9c8a7ced-4990-4ea2-baff-8d3adf064a56","Type":"ContainerDied","Data":"9bdbc5e52b0077ef250b50e9ebc4444a080a8eecfae92c07bd1b17a4120968c4"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.268481 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.291228 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d8ca-account-create-update-bdzbd"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.323962 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbdfbb78f-5g78q"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.328225 4867 generic.go:334] "Generic (PLEG): container finished" podID="3205b065-c067-4035-8afb-e2bbcc7d8a42" containerID="2308efd8efc29d35e443b922f20dee961e0822be16a9b0b3be84cb600b8719cd" exitCode=0 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.331757 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3205b065-c067-4035-8afb-e2bbcc7d8a42","Type":"ContainerDied","Data":"2308efd8efc29d35e443b922f20dee961e0822be16a9b0b3be84cb600b8719cd"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.341293 4867 generic.go:334] "Generic (PLEG): container finished" podID="c6e96caa-b906-4b24-af21-8068ea727bba" containerID="dc673bc1feba5e02af532b24171ae7075ed044000fad91c5933e93e216ca2214" exitCode=143 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 
08:50:29.341371 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" event={"ID":"c6e96caa-b906-4b24-af21-8068ea727bba","Type":"ContainerDied","Data":"dc673bc1feba5e02af532b24171ae7075ed044000fad91c5933e93e216ca2214"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.354214 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d8ca-account-create-update-bdzbd"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.357523 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7fvfm"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.363683 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-zqg5l"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.373297 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7fvfm"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.378608 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d8ca-account-create-update-nzwkw"] Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.378933 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c4f874-d21a-42b7-884a-f070d8dc2150" containerName="openstack-network-exporter" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.378944 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c4f874-d21a-42b7-884a-f070d8dc2150" containerName="openstack-network-exporter" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.378959 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654c613f-4f96-41f0-8937-d4be9f7897da" containerName="ovsdbserver-nb" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.378965 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="654c613f-4f96-41f0-8937-d4be9f7897da" containerName="ovsdbserver-nb" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.378977 4867 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654c613f-4f96-41f0-8937-d4be9f7897da" containerName="openstack-network-exporter" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.378984 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="654c613f-4f96-41f0-8937-d4be9f7897da" containerName="openstack-network-exporter" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.378995 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d579322c-12b7-488b-8220-31ef35016c68" containerName="dnsmasq-dns" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379001 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d579322c-12b7-488b-8220-31ef35016c68" containerName="dnsmasq-dns" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.379009 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerName="proxy-server" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379015 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerName="proxy-server" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.379061 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c8a7ced-4990-4ea2-baff-8d3adf064a56" containerName="nova-cell1-novncproxy-novncproxy" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379068 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8a7ced-4990-4ea2-baff-8d3adf064a56" containerName="nova-cell1-novncproxy-novncproxy" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.379081 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1620c75e-1129-4850-9b27-7666e4cb8ed5" containerName="ovsdbserver-sb" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379087 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1620c75e-1129-4850-9b27-7666e4cb8ed5" containerName="ovsdbserver-sb" Jan 01 08:50:29 crc kubenswrapper[4867]: 
E0101 08:50:29.379099 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d579322c-12b7-488b-8220-31ef35016c68" containerName="init" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379104 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d579322c-12b7-488b-8220-31ef35016c68" containerName="init" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.379364 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1620c75e-1129-4850-9b27-7666e4cb8ed5" containerName="openstack-network-exporter" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379372 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1620c75e-1129-4850-9b27-7666e4cb8ed5" containerName="openstack-network-exporter" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.379382 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerName="proxy-httpd" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379389 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerName="proxy-httpd" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379612 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c4f874-d21a-42b7-884a-f070d8dc2150" containerName="openstack-network-exporter" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379626 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="654c613f-4f96-41f0-8937-d4be9f7897da" containerName="openstack-network-exporter" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379657 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="654c613f-4f96-41f0-8937-d4be9f7897da" containerName="ovsdbserver-nb" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379670 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerName="proxy-server" Jan 01 08:50:29 crc 
kubenswrapper[4867]: I0101 08:50:29.379679 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerName="proxy-httpd" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379691 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1620c75e-1129-4850-9b27-7666e4cb8ed5" containerName="openstack-network-exporter" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379701 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1620c75e-1129-4850-9b27-7666e4cb8ed5" containerName="ovsdbserver-sb" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379714 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c8a7ced-4990-4ea2-baff-8d3adf064a56" containerName="nova-cell1-novncproxy-novncproxy" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.379740 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d579322c-12b7-488b-8220-31ef35016c68" containerName="dnsmasq-dns" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.380394 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.382969 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.389167 4867 generic.go:334] "Generic (PLEG): container finished" podID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerID="b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179" exitCode=143 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.389225 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cda1d2c0-2470-41f9-9969-776f8883a38b","Type":"ContainerDied","Data":"b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179"} Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.391231 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-zqg5l"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.399578 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6498f7d58c-nhfz8"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.399788 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6498f7d58c-nhfz8" podUID="985cc3ff-ea2f-4386-a828-180deef97412" containerName="keystone-api" containerID="cri-o://8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4" gracePeriod=30 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.409136 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8ca-account-create-update-nzwkw"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.416206 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.440198 4867 scope.go:117] "RemoveContainer" containerID="e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac" Jan 01 08:50:29 
crc kubenswrapper[4867]: I0101 08:50:29.442251 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.541087 4867 scope.go:117] "RemoveContainer" containerID="4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.543392 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4t8xd"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.553463 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9cgp\" (UniqueName: \"kubernetes.io/projected/3f4d9b08-1038-4f16-9217-509166cc2e7b-kube-api-access-d9cgp\") pod \"keystone-d8ca-account-create-update-nzwkw\" (UID: \"3f4d9b08-1038-4f16-9217-509166cc2e7b\") " pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.564223 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4d9b08-1038-4f16-9217-509166cc2e7b-operator-scripts\") pod \"keystone-d8ca-account-create-update-nzwkw\" (UID: \"3f4d9b08-1038-4f16-9217-509166cc2e7b\") " pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.571271 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4t8xd"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.585530 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d8ca-account-create-update-nzwkw"] Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.586657 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-d9cgp operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openstack/keystone-d8ca-account-create-update-nzwkw" podUID="3f4d9b08-1038-4f16-9217-509166cc2e7b" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.605819 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7n8cs"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.638529 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.645683 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" containerName="galera" containerID="cri-o://b839a4dffb22a75f3657a1d1eebb4e7c86aa3448b01b75268a7fa008e4d35304" gracePeriod=30 Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.645773 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.646693 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.653205 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-86c7f77bc7-nt6jq"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.662331 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-86c7f77bc7-nt6jq"] Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.666011 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4d9b08-1038-4f16-9217-509166cc2e7b-operator-scripts\") pod \"keystone-d8ca-account-create-update-nzwkw\" (UID: \"3f4d9b08-1038-4f16-9217-509166cc2e7b\") " pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.666109 4867 configmap.go:193] Couldn't get configMap 
openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.666154 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f4d9b08-1038-4f16-9217-509166cc2e7b-operator-scripts podName:3f4d9b08-1038-4f16-9217-509166cc2e7b nodeName:}" failed. No retries permitted until 2026-01-01 08:50:30.166139293 +0000 UTC m=+1439.301408062 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3f4d9b08-1038-4f16-9217-509166cc2e7b-operator-scripts") pod "keystone-d8ca-account-create-update-nzwkw" (UID: "3f4d9b08-1038-4f16-9217-509166cc2e7b") : configmap "openstack-scripts" not found Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.666453 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9cgp\" (UniqueName: \"kubernetes.io/projected/3f4d9b08-1038-4f16-9217-509166cc2e7b-kube-api-access-d9cgp\") pod \"keystone-d8ca-account-create-update-nzwkw\" (UID: \"3f4d9b08-1038-4f16-9217-509166cc2e7b\") " pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.666508 4867 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.666788 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data podName:84d7aac6-1073-41c0-acff-169e36ec197d nodeName:}" failed. No retries permitted until 2026-01-01 08:50:33.666779541 +0000 UTC m=+1442.802048310 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data") pod "rabbitmq-server-0" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d") : configmap "rabbitmq-config-data" not found Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.670025 4867 projected.go:194] Error preparing data for projected volume kube-api-access-d9cgp for pod openstack/keystone-d8ca-account-create-update-nzwkw: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.670078 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4d9b08-1038-4f16-9217-509166cc2e7b-kube-api-access-d9cgp podName:3f4d9b08-1038-4f16-9217-509166cc2e7b nodeName:}" failed. No retries permitted until 2026-01-01 08:50:30.170051532 +0000 UTC m=+1439.305320301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d9cgp" (UniqueName: "kubernetes.io/projected/3f4d9b08-1038-4f16-9217-509166cc2e7b-kube-api-access-d9cgp") pod "keystone-d8ca-account-create-update-nzwkw" (UID: "3f4d9b08-1038-4f16-9217-509166cc2e7b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.673779 4867 scope.go:117] "RemoveContainer" containerID="e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.674123 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac\": container with ID starting with e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac not found: ID does not exist" containerID="e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.674252 4867 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac"} err="failed to get container status \"e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac\": rpc error: code = NotFound desc = could not find container \"e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac\": container with ID starting with e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac not found: ID does not exist" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.674280 4867 scope.go:117] "RemoveContainer" containerID="4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.674541 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398\": container with ID starting with 4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398 not found: ID does not exist" containerID="4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.674568 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398"} err="failed to get container status \"4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398\": rpc error: code = NotFound desc = could not find container \"4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398\": container with ID starting with 4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398 not found: ID does not exist" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.674585 4867 scope.go:117] "RemoveContainer" containerID="e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.674810 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac"} err="failed to get container status \"e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac\": rpc error: code = NotFound desc = could not find container \"e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac\": container with ID starting with e4fe854d98b51a453dcc4f2c161e7293d1516af4b78ad4a4598ea5c3eea480ac not found: ID does not exist" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.674834 4867 scope.go:117] "RemoveContainer" containerID="4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.675102 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398"} err="failed to get container status \"4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398\": rpc error: code = NotFound desc = could not find container \"4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398\": container with ID starting with 4cd0bb63af1fee6659642feb949cce89eb3edf52c7db8a4bc4bddb7af7d72398 not found: ID does not exist" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.675124 4867 scope.go:117] "RemoveContainer" containerID="c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.710777 4867 scope.go:117] "RemoveContainer" containerID="c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd" Jan 01 08:50:29 crc kubenswrapper[4867]: E0101 08:50:29.711211 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd\": container with ID starting with 
c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd not found: ID does not exist" containerID="c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.711256 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd"} err="failed to get container status \"c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd\": rpc error: code = NotFound desc = could not find container \"c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd\": container with ID starting with c05c8abcfa20118c3f417e5d1941b6250e0b465b8219e18b8df907477937b8fd not found: ID does not exist" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.711282 4867 scope.go:117] "RemoveContainer" containerID="e698afe95247188c1349dc45263790341fa72c55e4b91893ad3b356253a8a571" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.767721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-operator-scripts\") pod \"d2662702-83ed-4457-a630-e8a6d07ffb8b\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.767779 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-generated\") pod \"d2662702-83ed-4457-a630-e8a6d07ffb8b\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.767815 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-galera-tls-certs\") pod \"d2662702-83ed-4457-a630-e8a6d07ffb8b\" (UID: 
\"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.767844 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-default\") pod \"d2662702-83ed-4457-a630-e8a6d07ffb8b\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.767871 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-kolla-config\") pod \"d2662702-83ed-4457-a630-e8a6d07ffb8b\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.767946 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-combined-ca-bundle\") pod \"d2662702-83ed-4457-a630-e8a6d07ffb8b\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.767978 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpxhd\" (UniqueName: \"kubernetes.io/projected/d2662702-83ed-4457-a630-e8a6d07ffb8b-kube-api-access-dpxhd\") pod \"d2662702-83ed-4457-a630-e8a6d07ffb8b\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.768030 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d2662702-83ed-4457-a630-e8a6d07ffb8b\" (UID: \"d2662702-83ed-4457-a630-e8a6d07ffb8b\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.768795 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "d2662702-83ed-4457-a630-e8a6d07ffb8b" (UID: "d2662702-83ed-4457-a630-e8a6d07ffb8b"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.769170 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d2662702-83ed-4457-a630-e8a6d07ffb8b" (UID: "d2662702-83ed-4457-a630-e8a6d07ffb8b"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.769522 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "d2662702-83ed-4457-a630-e8a6d07ffb8b" (UID: "d2662702-83ed-4457-a630-e8a6d07ffb8b"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.769661 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2662702-83ed-4457-a630-e8a6d07ffb8b" (UID: "d2662702-83ed-4457-a630-e8a6d07ffb8b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.783946 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2662702-83ed-4457-a630-e8a6d07ffb8b-kube-api-access-dpxhd" (OuterVolumeSpecName: "kube-api-access-dpxhd") pod "d2662702-83ed-4457-a630-e8a6d07ffb8b" (UID: "d2662702-83ed-4457-a630-e8a6d07ffb8b"). InnerVolumeSpecName "kube-api-access-dpxhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.803316 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "mysql-db") pod "d2662702-83ed-4457-a630-e8a6d07ffb8b" (UID: "d2662702-83ed-4457-a630-e8a6d07ffb8b"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.805282 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2662702-83ed-4457-a630-e8a6d07ffb8b" (UID: "d2662702-83ed-4457-a630-e8a6d07ffb8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.820128 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.820867 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "d2662702-83ed-4457-a630-e8a6d07ffb8b" (UID: "d2662702-83ed-4457-a630-e8a6d07ffb8b"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.869531 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.869563 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.869575 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.869583 4867 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.869591 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.869601 4867 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d2662702-83ed-4457-a630-e8a6d07ffb8b-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.869618 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2662702-83ed-4457-a630-e8a6d07ffb8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 
08:50:29.869626 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpxhd\" (UniqueName: \"kubernetes.io/projected/d2662702-83ed-4457-a630-e8a6d07ffb8b-kube-api-access-dpxhd\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.891869 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.912742 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.918402 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f3c1-account-create-update-lpffc" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.939076 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-69c1-account-create-update-xbwpk" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.970662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-certs\") pod \"15b1cd3d-248e-4861-a69a-4c8d284babb3\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.970955 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-combined-ca-bundle\") pod \"15b1cd3d-248e-4861-a69a-4c8d284babb3\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.971075 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-config\") pod \"15b1cd3d-248e-4861-a69a-4c8d284babb3\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.971122 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lshtc\" (UniqueName: \"kubernetes.io/projected/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-api-access-lshtc\") pod \"15b1cd3d-248e-4861-a69a-4c8d284babb3\" (UID: \"15b1cd3d-248e-4861-a69a-4c8d284babb3\") " Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.971561 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.973934 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-api-access-lshtc" (OuterVolumeSpecName: "kube-api-access-lshtc") pod "15b1cd3d-248e-4861-a69a-4c8d284babb3" (UID: "15b1cd3d-248e-4861-a69a-4c8d284babb3"). InnerVolumeSpecName "kube-api-access-lshtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.992514 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "15b1cd3d-248e-4861-a69a-4c8d284babb3" (UID: "15b1cd3d-248e-4861-a69a-4c8d284babb3"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:29 crc kubenswrapper[4867]: I0101 08:50:29.999028 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15b1cd3d-248e-4861-a69a-4c8d284babb3" (UID: "15b1cd3d-248e-4861-a69a-4c8d284babb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.011028 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b839a4dffb22a75f3657a1d1eebb4e7c86aa3448b01b75268a7fa008e4d35304" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.013924 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b839a4dffb22a75f3657a1d1eebb4e7c86aa3448b01b75268a7fa008e4d35304" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.015430 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b839a4dffb22a75f3657a1d1eebb4e7c86aa3448b01b75268a7fa008e4d35304" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.015459 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/openstack-galera-0" podUID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" containerName="galera" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.018068 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "15b1cd3d-248e-4861-a69a-4c8d284babb3" (UID: "15b1cd3d-248e-4861-a69a-4c8d284babb3"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.072748 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe85b54-84b3-46ab-94b7-597ffd52f997-operator-scripts\") pod \"6fe85b54-84b3-46ab-94b7-597ffd52f997\" (UID: \"6fe85b54-84b3-46ab-94b7-597ffd52f997\") " Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.073025 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdrc6\" (UniqueName: \"kubernetes.io/projected/6c6fd580-15f1-4929-b211-ecb1dc767e7c-kube-api-access-vdrc6\") pod \"6c6fd580-15f1-4929-b211-ecb1dc767e7c\" (UID: \"6c6fd580-15f1-4929-b211-ecb1dc767e7c\") " Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.073083 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2vvv\" (UniqueName: \"kubernetes.io/projected/6fe85b54-84b3-46ab-94b7-597ffd52f997-kube-api-access-q2vvv\") pod \"6fe85b54-84b3-46ab-94b7-597ffd52f997\" (UID: \"6fe85b54-84b3-46ab-94b7-597ffd52f997\") " Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.073099 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp2mm\" (UniqueName: \"kubernetes.io/projected/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-kube-api-access-gp2mm\") pod \"22daf9e9-6114-4fc4-951e-da0e7b92c4b8\" (UID: 
\"22daf9e9-6114-4fc4-951e-da0e7b92c4b8\") " Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.073118 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-operator-scripts\") pod \"22daf9e9-6114-4fc4-951e-da0e7b92c4b8\" (UID: \"22daf9e9-6114-4fc4-951e-da0e7b92c4b8\") " Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.073782 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6fd580-15f1-4929-b211-ecb1dc767e7c-operator-scripts\") pod \"6c6fd580-15f1-4929-b211-ecb1dc767e7c\" (UID: \"6c6fd580-15f1-4929-b211-ecb1dc767e7c\") " Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.073443 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe85b54-84b3-46ab-94b7-597ffd52f997-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fe85b54-84b3-46ab-94b7-597ffd52f997" (UID: "6fe85b54-84b3-46ab-94b7-597ffd52f997"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.073687 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22daf9e9-6114-4fc4-951e-da0e7b92c4b8" (UID: "22daf9e9-6114-4fc4-951e-da0e7b92c4b8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.074357 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.074376 4867 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.074387 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.074396 4867 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.074411 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe85b54-84b3-46ab-94b7-597ffd52f997-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.074420 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lshtc\" (UniqueName: \"kubernetes.io/projected/15b1cd3d-248e-4861-a69a-4c8d284babb3-kube-api-access-lshtc\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.074390 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6fd580-15f1-4929-b211-ecb1dc767e7c-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "6c6fd580-15f1-4929-b211-ecb1dc767e7c" (UID: "6c6fd580-15f1-4929-b211-ecb1dc767e7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.076138 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-kube-api-access-gp2mm" (OuterVolumeSpecName: "kube-api-access-gp2mm") pod "22daf9e9-6114-4fc4-951e-da0e7b92c4b8" (UID: "22daf9e9-6114-4fc4-951e-da0e7b92c4b8"). InnerVolumeSpecName "kube-api-access-gp2mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.076503 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6fd580-15f1-4929-b211-ecb1dc767e7c-kube-api-access-vdrc6" (OuterVolumeSpecName: "kube-api-access-vdrc6") pod "6c6fd580-15f1-4929-b211-ecb1dc767e7c" (UID: "6c6fd580-15f1-4929-b211-ecb1dc767e7c"). InnerVolumeSpecName "kube-api-access-vdrc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.077449 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe85b54-84b3-46ab-94b7-597ffd52f997-kube-api-access-q2vvv" (OuterVolumeSpecName: "kube-api-access-q2vvv") pod "6fe85b54-84b3-46ab-94b7-597ffd52f997" (UID: "6fe85b54-84b3-46ab-94b7-597ffd52f997"). InnerVolumeSpecName "kube-api-access-q2vvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.175792 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4d9b08-1038-4f16-9217-509166cc2e7b-operator-scripts\") pod \"keystone-d8ca-account-create-update-nzwkw\" (UID: \"3f4d9b08-1038-4f16-9217-509166cc2e7b\") " pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.175946 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9cgp\" (UniqueName: \"kubernetes.io/projected/3f4d9b08-1038-4f16-9217-509166cc2e7b-kube-api-access-d9cgp\") pod \"keystone-d8ca-account-create-update-nzwkw\" (UID: \"3f4d9b08-1038-4f16-9217-509166cc2e7b\") " pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.175982 4867 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.176010 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdrc6\" (UniqueName: \"kubernetes.io/projected/6c6fd580-15f1-4929-b211-ecb1dc767e7c-kube-api-access-vdrc6\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.176059 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f4d9b08-1038-4f16-9217-509166cc2e7b-operator-scripts podName:3f4d9b08-1038-4f16-9217-509166cc2e7b nodeName:}" failed. No retries permitted until 2026-01-01 08:50:31.17603852 +0000 UTC m=+1440.311307299 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3f4d9b08-1038-4f16-9217-509166cc2e7b-operator-scripts") pod "keystone-d8ca-account-create-update-nzwkw" (UID: "3f4d9b08-1038-4f16-9217-509166cc2e7b") : configmap "openstack-scripts" not found Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.176092 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2vvv\" (UniqueName: \"kubernetes.io/projected/6fe85b54-84b3-46ab-94b7-597ffd52f997-kube-api-access-q2vvv\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.176106 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp2mm\" (UniqueName: \"kubernetes.io/projected/22daf9e9-6114-4fc4-951e-da0e7b92c4b8-kube-api-access-gp2mm\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.176118 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6fd580-15f1-4929-b211-ecb1dc767e7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.193082 4867 projected.go:194] Error preparing data for projected volume kube-api-access-d9cgp for pod openstack/keystone-d8ca-account-create-update-nzwkw: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.193149 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4d9b08-1038-4f16-9217-509166cc2e7b-kube-api-access-d9cgp podName:3f4d9b08-1038-4f16-9217-509166cc2e7b nodeName:}" failed. No retries permitted until 2026-01-01 08:50:31.193129066 +0000 UTC m=+1440.328397835 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9cgp" (UniqueName: "kubernetes.io/projected/3f4d9b08-1038-4f16-9217-509166cc2e7b-kube-api-access-d9cgp") pod "keystone-d8ca-account-create-update-nzwkw" (UID: "3f4d9b08-1038-4f16-9217-509166cc2e7b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.406706 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.408502 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.424215 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.424280 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="8799ae41-c9cb-409a-ac59-3e6b59bb0198" containerName="nova-cell1-conductor-conductor" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.442883 4867 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/cinder-api-0" podUID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.166:8776/healthcheck\": read tcp 10.217.0.2:48784->10.217.0.166:8776: read: connection reset by peer" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.505937 4867 generic.go:334] "Generic (PLEG): container finished" podID="1822baf8-11aa-4152-a74f-2ce0383c1094" containerID="e80411603dc0ac8d446f1e707d73b2bad909e42859006cf6a585616040d3b259" exitCode=0 Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.506088 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-67dd85d5b6-ww7ll" event={"ID":"1822baf8-11aa-4152-a74f-2ce0383c1094","Type":"ContainerDied","Data":"e80411603dc0ac8d446f1e707d73b2bad909e42859006cf6a585616040d3b259"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.568238 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d2662702-83ed-4457-a630-e8a6d07ffb8b","Type":"ContainerDied","Data":"face748ce233053adbf72661aa6e0d193c0f083397b4d512504857ce7c00181a"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.568338 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.568376 4867 scope.go:117] "RemoveContainer" containerID="efcd353d29f3de492430dcf05725698c36d4fc1c75947e3d7d13befdcc5b7a27" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.576605 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.576871 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9ba1-account-create-update-cvb54" event={"ID":"6c6fd580-15f1-4929-b211-ecb1dc767e7c","Type":"ContainerDied","Data":"8074aa337305fadd047e17c8b587cc37fb27e4963a3bb6030312dfb8ebf64947"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.581072 4867 generic.go:334] "Generic (PLEG): container finished" podID="15b1cd3d-248e-4861-a69a-4c8d284babb3" containerID="241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790" exitCode=2 Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.581128 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"15b1cd3d-248e-4861-a69a-4c8d284babb3","Type":"ContainerDied","Data":"241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.581151 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"15b1cd3d-248e-4861-a69a-4c8d284babb3","Type":"ContainerDied","Data":"5b2c2937f1d076d55f8da93966befa0dda487e66069dd4617660f8aadafad024"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.581205 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.614245 4867 generic.go:334] "Generic (PLEG): container finished" podID="6f47f095-abde-4e07-8edf-d0a318043581" containerID="6fd8f4c7059e184922dd9a91f3056bb550d7c290a243aea1fe9c949fb9fa29c7" exitCode=0 Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.614354 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6f47f095-abde-4e07-8edf-d0a318043581","Type":"ContainerDied","Data":"6fd8f4c7059e184922dd9a91f3056bb550d7c290a243aea1fe9c949fb9fa29c7"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.628029 4867 generic.go:334] "Generic (PLEG): container finished" podID="b43ddff2-67cd-4ab7-84c1-763dd002457c" containerID="4e331c080ef51c9e8e140526532ca6a567a4007701b3c6a5707e70e828973809" exitCode=0 Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.628133 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b43ddff2-67cd-4ab7-84c1-763dd002457c","Type":"ContainerDied","Data":"4e331c080ef51c9e8e140526532ca6a567a4007701b3c6a5707e70e828973809"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.630593 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": read tcp 10.217.0.2:37100->10.217.0.206:8775: read: connection reset by peer" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.630610 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": read tcp 10.217.0.2:37090->10.217.0.206:8775: read: connection reset by peer" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 
08:50:30.631339 4867 generic.go:334] "Generic (PLEG): container finished" podID="19551dba-c741-42e0-b228-6cad78717264" containerID="c17bcf9b54864c61af9437862e149d2b101e45a4f71a347afd0f0194d5754edb" exitCode=1 Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.631386 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7n8cs" event={"ID":"19551dba-c741-42e0-b228-6cad78717264","Type":"ContainerDied","Data":"c17bcf9b54864c61af9437862e149d2b101e45a4f71a347afd0f0194d5754edb"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.631736 4867 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-7n8cs" secret="" err="secret \"galera-openstack-dockercfg-2mj76\" not found" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.631766 4867 scope.go:117] "RemoveContainer" containerID="c17bcf9b54864c61af9437862e149d2b101e45a4f71a347afd0f0194d5754edb" Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.632052 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-7n8cs_openstack(19551dba-c741-42e0-b228-6cad78717264)\"" pod="openstack/root-account-create-update-7n8cs" podUID="19551dba-c741-42e0-b228-6cad78717264" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.646211 4867 generic.go:334] "Generic (PLEG): container finished" podID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerID="9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2" exitCode=0 Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.646241 4867 generic.go:334] "Generic (PLEG): container finished" podID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerID="46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02" exitCode=2 Jan 01 08:50:30 crc 
kubenswrapper[4867]: I0101 08:50:30.646249 4867 generic.go:334] "Generic (PLEG): container finished" podID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerID="b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049" exitCode=0 Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.646554 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dad921b-d7dd-4113-85d2-78d6f59944b4","Type":"ContainerDied","Data":"9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.646607 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dad921b-d7dd-4113-85d2-78d6f59944b4","Type":"ContainerDied","Data":"46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.646620 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dad921b-d7dd-4113-85d2-78d6f59944b4","Type":"ContainerDied","Data":"b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.648409 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-69c1-account-create-update-xbwpk" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.648441 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-69c1-account-create-update-xbwpk" event={"ID":"6fe85b54-84b3-46ab-94b7-597ffd52f997","Type":"ContainerDied","Data":"8b37800d130be3203cc2409edf08f6371d1b9d3c13c6090d58baa805f8616ad6"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.650827 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f3c1-account-create-update-lpffc" event={"ID":"22daf9e9-6114-4fc4-951e-da0e7b92c4b8","Type":"ContainerDied","Data":"46df4a5294094fbf8952da3670a54d9847fc3e78401e131458a99dde358a3bfc"} Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.650863 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f3c1-account-create-update-lpffc" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.658224 4867 generic.go:334] "Generic (PLEG): container finished" podID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" containerID="9185c2834b9cf63d0aa63913819769f2b534971b2a8528f9b981383d4142d637" exitCode=0 Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.658302 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.658303 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e809a11a-a5d8-49a0-9d9d-cac6a399dd35","Type":"ContainerDied","Data":"9185c2834b9cf63d0aa63913819769f2b534971b2a8528f9b981383d4142d637"} Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.804383 4867 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.804455 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/19551dba-c741-42e0-b228-6cad78717264-operator-scripts podName:19551dba-c741-42e0-b228-6cad78717264 nodeName:}" failed. No retries permitted until 2026-01-01 08:50:31.304436377 +0000 UTC m=+1440.439705146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/19551dba-c741-42e0-b228-6cad78717264-operator-scripts") pod "root-account-create-update-7n8cs" (UID: "19551dba-c741-42e0-b228-6cad78717264") : configmap "openstack-scripts" not found Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.822089 4867 scope.go:117] "RemoveContainer" containerID="d291f59607c758f47430417e26a8d57995eef27056410df0cfe7a2699c5e4b06" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.827715 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.840753 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.881339 4867 scope.go:117] "RemoveContainer" containerID="241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.917861 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f3c1-account-create-update-lpffc"] Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.930942 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f3c1-account-create-update-lpffc"] Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.944829 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.967414 4867 scope.go:117] "RemoveContainer" containerID="241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.967536 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 01 08:50:30 crc kubenswrapper[4867]: E0101 08:50:30.977068 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790\": container with ID starting with 241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790 not found: ID does not exist" containerID="241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.977106 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790"} err="failed to get container status \"241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790\": rpc error: code = NotFound desc = could not find container 
\"241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790\": container with ID starting with 241032430a352896d36a6db8af38c17729d1b448351883f302237d33b4869790 not found: ID does not exist" Jan 01 08:50:30 crc kubenswrapper[4867]: I0101 08:50:30.977133 4867 scope.go:117] "RemoveContainer" containerID="ec9643ea8268018a8e1ee9aaddca4cf958028c7a397a85ccd620195862527c6b" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.003335 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-69c1-account-create-update-xbwpk"] Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.008820 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-internal-tls-certs\") pod \"1822baf8-11aa-4152-a74f-2ce0383c1094\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.008857 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-config-data\") pod \"1822baf8-11aa-4152-a74f-2ce0383c1094\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.008895 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-combined-ca-bundle\") pod \"1822baf8-11aa-4152-a74f-2ce0383c1094\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.008936 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb687\" (UniqueName: \"kubernetes.io/projected/1822baf8-11aa-4152-a74f-2ce0383c1094-kube-api-access-cb687\") pod \"1822baf8-11aa-4152-a74f-2ce0383c1094\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " Jan 01 08:50:31 
crc kubenswrapper[4867]: I0101 08:50:31.009028 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-public-tls-certs\") pod \"1822baf8-11aa-4152-a74f-2ce0383c1094\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.009061 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-scripts\") pod \"1822baf8-11aa-4152-a74f-2ce0383c1094\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.009134 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1822baf8-11aa-4152-a74f-2ce0383c1094-logs\") pod \"1822baf8-11aa-4152-a74f-2ce0383c1094\" (UID: \"1822baf8-11aa-4152-a74f-2ce0383c1094\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.010694 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1822baf8-11aa-4152-a74f-2ce0383c1094-logs" (OuterVolumeSpecName: "logs") pod "1822baf8-11aa-4152-a74f-2ce0383c1094" (UID: "1822baf8-11aa-4152-a74f-2ce0383c1094"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.018184 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1822baf8-11aa-4152-a74f-2ce0383c1094-kube-api-access-cb687" (OuterVolumeSpecName: "kube-api-access-cb687") pod "1822baf8-11aa-4152-a74f-2ce0383c1094" (UID: "1822baf8-11aa-4152-a74f-2ce0383c1094"). InnerVolumeSpecName "kube-api-access-cb687". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.020123 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-scripts" (OuterVolumeSpecName: "scripts") pod "1822baf8-11aa-4152-a74f-2ce0383c1094" (UID: "1822baf8-11aa-4152-a74f-2ce0383c1094"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.027844 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-69c1-account-create-update-xbwpk"] Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.030370 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.042966 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.063795 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.084439 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9ba1-account-create-update-cvb54"] Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.096699 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.107776 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9ba1-account-create-update-cvb54"] Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.111481 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.111509 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1822baf8-11aa-4152-a74f-2ce0383c1094-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.111518 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb687\" (UniqueName: \"kubernetes.io/projected/1822baf8-11aa-4152-a74f-2ce0383c1094-kube-api-access-cb687\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.117779 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-config-data" (OuterVolumeSpecName: "config-data") pod "1822baf8-11aa-4152-a74f-2ce0383c1094" (UID: "1822baf8-11aa-4152-a74f-2ce0383c1094"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.134981 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.146551 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.181409 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.181482 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerName="ovn-northd" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.203807 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1822baf8-11aa-4152-a74f-2ce0383c1094" (UID: "1822baf8-11aa-4152-a74f-2ce0383c1094"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216444 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-combined-ca-bundle\") pod \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216518 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-httpd-run\") pod \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216623 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-internal-tls-certs\") pod \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216654 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data\") pod \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216691 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff82f43d-33bd-47f0-9864-83bb3048f9b2-etc-machine-id\") pod \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216743 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-rmcs8\" (UniqueName: \"kubernetes.io/projected/ff82f43d-33bd-47f0-9864-83bb3048f9b2-kube-api-access-rmcs8\") pod \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216779 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-config-data\") pod \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216801 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff82f43d-33bd-47f0-9864-83bb3048f9b2-logs\") pod \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216830 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-logs\") pod \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216862 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-internal-tls-certs\") pod \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216890 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-scripts\") pod \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 
08:50:31.216935 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc5vm\" (UniqueName: \"kubernetes.io/projected/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-kube-api-access-jc5vm\") pod \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216965 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-combined-ca-bundle\") pod \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.216992 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-public-tls-certs\") pod \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.217018 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data-custom\") pod \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\" (UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.217087 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\" (UID: \"e809a11a-a5d8-49a0-9d9d-cac6a399dd35\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.217118 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-scripts\") pod \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\" 
(UID: \"ff82f43d-33bd-47f0-9864-83bb3048f9b2\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.217445 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9cgp\" (UniqueName: \"kubernetes.io/projected/3f4d9b08-1038-4f16-9217-509166cc2e7b-kube-api-access-d9cgp\") pod \"keystone-d8ca-account-create-update-nzwkw\" (UID: \"3f4d9b08-1038-4f16-9217-509166cc2e7b\") " pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.217812 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4d9b08-1038-4f16-9217-509166cc2e7b-operator-scripts\") pod \"keystone-d8ca-account-create-update-nzwkw\" (UID: \"3f4d9b08-1038-4f16-9217-509166cc2e7b\") " pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.218390 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.218413 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.218410 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-logs" (OuterVolumeSpecName: "logs") pod "e809a11a-a5d8-49a0-9d9d-cac6a399dd35" (UID: "e809a11a-a5d8-49a0-9d9d-cac6a399dd35"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.218474 4867 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.218480 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff82f43d-33bd-47f0-9864-83bb3048f9b2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ff82f43d-33bd-47f0-9864-83bb3048f9b2" (UID: "ff82f43d-33bd-47f0-9864-83bb3048f9b2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.218521 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f4d9b08-1038-4f16-9217-509166cc2e7b-operator-scripts podName:3f4d9b08-1038-4f16-9217-509166cc2e7b nodeName:}" failed. No retries permitted until 2026-01-01 08:50:33.218506107 +0000 UTC m=+1442.353774876 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3f4d9b08-1038-4f16-9217-509166cc2e7b-operator-scripts") pod "keystone-d8ca-account-create-update-nzwkw" (UID: "3f4d9b08-1038-4f16-9217-509166cc2e7b") : configmap "openstack-scripts" not found Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.219342 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15b1cd3d-248e-4861-a69a-4c8d284babb3" path="/var/lib/kubelet/pods/15b1cd3d-248e-4861-a69a-4c8d284babb3/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.220699 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22daf9e9-6114-4fc4-951e-da0e7b92c4b8" path="/var/lib/kubelet/pods/22daf9e9-6114-4fc4-951e-da0e7b92c4b8/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.221108 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28bc6ac4-481b-4809-a61e-f32ff6a17920" path="/var/lib/kubelet/pods/28bc6ac4-481b-4809-a61e-f32ff6a17920/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.222684 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e809a11a-a5d8-49a0-9d9d-cac6a399dd35" (UID: "e809a11a-a5d8-49a0-9d9d-cac6a399dd35"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.222798 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff82f43d-33bd-47f0-9864-83bb3048f9b2-logs" (OuterVolumeSpecName: "logs") pod "ff82f43d-33bd-47f0-9864-83bb3048f9b2" (UID: "ff82f43d-33bd-47f0-9864-83bb3048f9b2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.231577 4867 projected.go:194] Error preparing data for projected volume kube-api-access-d9cgp for pod openstack/keystone-d8ca-account-create-update-nzwkw: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.231634 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4d9b08-1038-4f16-9217-509166cc2e7b-kube-api-access-d9cgp podName:3f4d9b08-1038-4f16-9217-509166cc2e7b nodeName:}" failed. No retries permitted until 2026-01-01 08:50:33.231616022 +0000 UTC m=+1442.366884791 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9cgp" (UniqueName: "kubernetes.io/projected/3f4d9b08-1038-4f16-9217-509166cc2e7b-kube-api-access-d9cgp") pod "keystone-d8ca-account-create-update-nzwkw" (UID: "3f4d9b08-1038-4f16-9217-509166cc2e7b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.236401 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-scripts" (OuterVolumeSpecName: "scripts") pod "e809a11a-a5d8-49a0-9d9d-cac6a399dd35" (UID: "e809a11a-a5d8-49a0-9d9d-cac6a399dd35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.238430 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff82f43d-33bd-47f0-9864-83bb3048f9b2-kube-api-access-rmcs8" (OuterVolumeSpecName: "kube-api-access-rmcs8") pod "ff82f43d-33bd-47f0-9864-83bb3048f9b2" (UID: "ff82f43d-33bd-47f0-9864-83bb3048f9b2"). InnerVolumeSpecName "kube-api-access-rmcs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.240206 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30fc9439-d9d2-4a19-9ffd-2b80a7269e77" path="/var/lib/kubelet/pods/30fc9439-d9d2-4a19-9ffd-2b80a7269e77/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.241664 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6fd580-15f1-4929-b211-ecb1dc767e7c" path="/var/lib/kubelet/pods/6c6fd580-15f1-4929-b211-ecb1dc767e7c/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.242077 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe85b54-84b3-46ab-94b7-597ffd52f997" path="/var/lib/kubelet/pods/6fe85b54-84b3-46ab-94b7-597ffd52f997/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.242548 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83a36dad-781c-47b3-a1f2-d8aa5d7182fb" path="/var/lib/kubelet/pods/83a36dad-781c-47b3-a1f2-d8aa5d7182fb/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.243205 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86fbff8a-ec9f-4575-be56-2e32acdf53ad" path="/var/lib/kubelet/pods/86fbff8a-ec9f-4575-be56-2e32acdf53ad/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.257623 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-kube-api-access-jc5vm" (OuterVolumeSpecName: "kube-api-access-jc5vm") pod "e809a11a-a5d8-49a0-9d9d-cac6a399dd35" (UID: "e809a11a-a5d8-49a0-9d9d-cac6a399dd35"). InnerVolumeSpecName "kube-api-access-jc5vm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.258097 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8a7ced-4990-4ea2-baff-8d3adf064a56" path="/var/lib/kubelet/pods/9c8a7ced-4990-4ea2-baff-8d3adf064a56/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.258259 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-scripts" (OuterVolumeSpecName: "scripts") pod "ff82f43d-33bd-47f0-9864-83bb3048f9b2" (UID: "ff82f43d-33bd-47f0-9864-83bb3048f9b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.258932 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2662702-83ed-4457-a630-e8a6d07ffb8b" path="/var/lib/kubelet/pods/d2662702-83ed-4457-a630-e8a6d07ffb8b/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.259662 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d579322c-12b7-488b-8220-31ef35016c68" path="/var/lib/kubelet/pods/d579322c-12b7-488b-8220-31ef35016c68/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.264644 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0" path="/var/lib/kubelet/pods/f1cfea17-e2e5-4785-ac84-a3c14a0cf1d0/volumes" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.272231 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ff82f43d-33bd-47f0-9864-83bb3048f9b2" (UID: "ff82f43d-33bd-47f0-9864-83bb3048f9b2"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.273173 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "e809a11a-a5d8-49a0-9d9d-cac6a399dd35" (UID: "e809a11a-a5d8-49a0-9d9d-cac6a399dd35"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.329718 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.329795 4867 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.330718 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/19551dba-c741-42e0-b228-6cad78717264-operator-scripts podName:19551dba-c741-42e0-b228-6cad78717264 nodeName:}" failed. No retries permitted until 2026-01-01 08:50:32.33069159 +0000 UTC m=+1441.465960389 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/19551dba-c741-42e0-b228-6cad78717264-operator-scripts") pod "root-account-create-update-7n8cs" (UID: "19551dba-c741-42e0-b228-6cad78717264") : configmap "openstack-scripts" not found Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.330972 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.330988 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.330998 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.331006 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff82f43d-33bd-47f0-9864-83bb3048f9b2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.331015 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmcs8\" (UniqueName: \"kubernetes.io/projected/ff82f43d-33bd-47f0-9864-83bb3048f9b2-kube-api-access-rmcs8\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.331024 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff82f43d-33bd-47f0-9864-83bb3048f9b2-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.331032 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.331040 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.331048 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc5vm\" (UniqueName: \"kubernetes.io/projected/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-kube-api-access-jc5vm\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.371810 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d733e18f1ee0ab5fdfc275f4b701971bfd4e30736094221d5f2e06640b3bfa5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.374545 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d733e18f1ee0ab5fdfc275f4b701971bfd4e30736094221d5f2e06640b3bfa5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.375885 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d733e18f1ee0ab5fdfc275f4b701971bfd4e30736094221d5f2e06640b3bfa5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 01 08:50:31 crc kubenswrapper[4867]: E0101 08:50:31.375924 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9" containerName="nova-scheduler-scheduler" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.392318 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1822baf8-11aa-4152-a74f-2ce0383c1094" (UID: "1822baf8-11aa-4152-a74f-2ce0383c1094"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.402577 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.434435 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.434463 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.462023 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data" (OuterVolumeSpecName: "config-data") pod "ff82f43d-33bd-47f0-9864-83bb3048f9b2" (UID: "ff82f43d-33bd-47f0-9864-83bb3048f9b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.462495 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1822baf8-11aa-4152-a74f-2ce0383c1094" (UID: "1822baf8-11aa-4152-a74f-2ce0383c1094"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.486656 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-config-data" (OuterVolumeSpecName: "config-data") pod "e809a11a-a5d8-49a0-9d9d-cac6a399dd35" (UID: "e809a11a-a5d8-49a0-9d9d-cac6a399dd35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.489122 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff82f43d-33bd-47f0-9864-83bb3048f9b2" (UID: "ff82f43d-33bd-47f0-9864-83bb3048f9b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.497981 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ff82f43d-33bd-47f0-9864-83bb3048f9b2" (UID: "ff82f43d-33bd-47f0-9864-83bb3048f9b2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.506116 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e809a11a-a5d8-49a0-9d9d-cac6a399dd35" (UID: "e809a11a-a5d8-49a0-9d9d-cac6a399dd35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.520604 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e809a11a-a5d8-49a0-9d9d-cac6a399dd35" (UID: "e809a11a-a5d8-49a0-9d9d-cac6a399dd35"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.524048 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ff82f43d-33bd-47f0-9864-83bb3048f9b2" (UID: "ff82f43d-33bd-47f0-9864-83bb3048f9b2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.536223 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.536254 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.536265 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.536273 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.536281 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.536288 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e809a11a-a5d8-49a0-9d9d-cac6a399dd35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.536296 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff82f43d-33bd-47f0-9864-83bb3048f9b2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.536303 4867 
reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1822baf8-11aa-4152-a74f-2ce0383c1094-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.670676 4867 generic.go:334] "Generic (PLEG): container finished" podID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" containerID="b839a4dffb22a75f3657a1d1eebb4e7c86aa3448b01b75268a7fa008e4d35304" exitCode=0 Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.672908 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.678863 4867 generic.go:334] "Generic (PLEG): container finished" podID="1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9" containerID="3d733e18f1ee0ab5fdfc275f4b701971bfd4e30736094221d5f2e06640b3bfa5" exitCode=0 Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.693518 4867 generic.go:334] "Generic (PLEG): container finished" podID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerID="3cd677561763860feb64840f8907414cdd4cd64aae8107b33f87bbbd3b84da9d" exitCode=0 Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.706505 4867 generic.go:334] "Generic (PLEG): container finished" podID="c6e96caa-b906-4b24-af21-8068ea727bba" containerID="12ac59ef1025a56a54145198bfc20879e2c8969f62ef2c28de3bb86b0129fd27" exitCode=0 Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.709338 4867 generic.go:334] "Generic (PLEG): container finished" podID="22fe2632-f8f6-4ef9-9f4c-72b69bd45932" containerID="f95ad7dcbf76b229ef0f72ae0e667de7d0e25a5f3d7e84f84fd18139ab18e305" exitCode=0 Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.727603 4867 generic.go:334] "Generic (PLEG): container finished" podID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" containerID="c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb" exitCode=0 Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.727739 4867 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.734116 4867 generic.go:334] "Generic (PLEG): container finished" podID="3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" containerID="dee68f8d073a368d94e9708c1869989ddd8ada0a6eb993b2a239618bdb95a0c6" exitCode=0 Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.736199 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8ca-account-create-update-nzwkw" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.738163 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-67dd85d5b6-ww7ll" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802224 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb","Type":"ContainerDied","Data":"b839a4dffb22a75f3657a1d1eebb4e7c86aa3448b01b75268a7fa008e4d35304"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802292 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e809a11a-a5d8-49a0-9d9d-cac6a399dd35","Type":"ContainerDied","Data":"bf8f8652376973036beb2d302c94d8c532f851e8ac6d076c012daf285f7fcf11"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802330 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9","Type":"ContainerDied","Data":"3d733e18f1ee0ab5fdfc275f4b701971bfd4e30736094221d5f2e06640b3bfa5"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802346 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6f47f095-abde-4e07-8edf-d0a318043581","Type":"ContainerDied","Data":"822f82508c998db7e356ecb226620c5945ed607a34c00995e9b70039a61c4c4d"} Jan 01 08:50:31 crc 
kubenswrapper[4867]: I0101 08:50:31.802360 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="822f82508c998db7e356ecb226620c5945ed607a34c00995e9b70039a61c4c4d" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802416 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7003b80-53fa-4550-8f18-486a0f7988c9","Type":"ContainerDied","Data":"3cd677561763860feb64840f8907414cdd4cd64aae8107b33f87bbbd3b84da9d"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802400 4867 scope.go:117] "RemoveContainer" containerID="9185c2834b9cf63d0aa63913819769f2b534971b2a8528f9b981383d4142d637" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7003b80-53fa-4550-8f18-486a0f7988c9","Type":"ContainerDied","Data":"31cdc5c3e7f0ac669624cccf9fc0cf73f710405f6bdf9b7380ffbd4d7e0196d0"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802472 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31cdc5c3e7f0ac669624cccf9fc0cf73f710405f6bdf9b7380ffbd4d7e0196d0" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802485 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b43ddff2-67cd-4ab7-84c1-763dd002457c","Type":"ContainerDied","Data":"b94ed9bd15f06982404e253657bd423c9017c54e4c5ccb6ba3d583c7fce6e16b"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802497 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b94ed9bd15f06982404e253657bd423c9017c54e4c5ccb6ba3d583c7fce6e16b" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802510 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" 
event={"ID":"c6e96caa-b906-4b24-af21-8068ea727bba","Type":"ContainerDied","Data":"12ac59ef1025a56a54145198bfc20879e2c8969f62ef2c28de3bb86b0129fd27"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802527 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7965d77d77-cwbt7" event={"ID":"22fe2632-f8f6-4ef9-9f4c-72b69bd45932","Type":"ContainerDied","Data":"f95ad7dcbf76b229ef0f72ae0e667de7d0e25a5f3d7e84f84fd18139ab18e305"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802543 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ff82f43d-33bd-47f0-9864-83bb3048f9b2","Type":"ContainerDied","Data":"c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802568 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ff82f43d-33bd-47f0-9864-83bb3048f9b2","Type":"ContainerDied","Data":"6c9faa3db976983e95098561d1c91bd4834cacbb33c6fce0bc3a0ed585a19543"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802599 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dc5bfddd-522rc" event={"ID":"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20","Type":"ContainerDied","Data":"dee68f8d073a368d94e9708c1869989ddd8ada0a6eb993b2a239618bdb95a0c6"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802615 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dc5bfddd-522rc" event={"ID":"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20","Type":"ContainerDied","Data":"c3bdf5b34305427f14a8a9029700bb4c7abd99e962ae92de63d6edf014b4bfcb"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802628 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3bdf5b34305427f14a8a9029700bb4c7abd99e962ae92de63d6edf014b4bfcb" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.802658 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-67dd85d5b6-ww7ll" event={"ID":"1822baf8-11aa-4152-a74f-2ce0383c1094","Type":"ContainerDied","Data":"9109b287283ce25f0cf31d32541a7913dcbf7e7e7f86d7286073c204ccaf08bc"} Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.811645 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.821041 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.844066 4867 scope.go:117] "RemoveContainer" containerID="c147d7635f762a0bb4d5c3b4b921b293ef2acefe6bfd101dcab003dc2f076886" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.853806 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.884118 4867 scope.go:117] "RemoveContainer" containerID="c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.899824 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.909527 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d8ca-account-create-update-nzwkw"] Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.918259 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d8ca-account-create-update-nzwkw"] Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.926500 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-67dd85d5b6-ww7ll"] Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.933128 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-67dd85d5b6-ww7ll"] Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.943494 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-config-data\") pod \"b43ddff2-67cd-4ab7-84c1-763dd002457c\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.943574 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-kolla-config\") pod \"b43ddff2-67cd-4ab7-84c1-763dd002457c\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.943613 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-combined-ca-bundle\") pod \"e7003b80-53fa-4550-8f18-486a0f7988c9\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.943697 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk76d\" (UniqueName: 
\"kubernetes.io/projected/b43ddff2-67cd-4ab7-84c1-763dd002457c-kube-api-access-lk76d\") pod \"b43ddff2-67cd-4ab7-84c1-763dd002457c\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.943927 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-config-data\") pod \"e7003b80-53fa-4550-8f18-486a0f7988c9\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945196 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x97bq\" (UniqueName: \"kubernetes.io/projected/6f47f095-abde-4e07-8edf-d0a318043581-kube-api-access-x97bq\") pod \"6f47f095-abde-4e07-8edf-d0a318043581\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945333 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-config-data" (OuterVolumeSpecName: "config-data") pod "b43ddff2-67cd-4ab7-84c1-763dd002457c" (UID: "b43ddff2-67cd-4ab7-84c1-763dd002457c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945493 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-combined-ca-bundle\") pod \"b43ddff2-67cd-4ab7-84c1-763dd002457c\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945533 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qk5x\" (UniqueName: \"kubernetes.io/projected/e7003b80-53fa-4550-8f18-486a0f7988c9-kube-api-access-5qk5x\") pod \"e7003b80-53fa-4550-8f18-486a0f7988c9\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945557 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-logs\") pod \"6f47f095-abde-4e07-8edf-d0a318043581\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945601 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data\") pod \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-memcached-tls-certs\") pod \"b43ddff2-67cd-4ab7-84c1-763dd002457c\" (UID: \"b43ddff2-67cd-4ab7-84c1-763dd002457c\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945635 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b43ddff2-67cd-4ab7-84c1-763dd002457c" (UID: "b43ddff2-67cd-4ab7-84c1-763dd002457c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-scripts\") pod \"6f47f095-abde-4e07-8edf-d0a318043581\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-config-data\") pod \"6f47f095-abde-4e07-8edf-d0a318043581\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945716 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-combined-ca-bundle\") pod \"6f47f095-abde-4e07-8edf-d0a318043581\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945740 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7003b80-53fa-4550-8f18-486a0f7988c9-logs\") pod \"e7003b80-53fa-4550-8f18-486a0f7988c9\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") " Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945765 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t7ll\" (UniqueName: \"kubernetes.io/projected/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-kube-api-access-7t7ll\") pod \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\" (UID: 
\"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") "
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.945790 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-combined-ca-bundle\") pod \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") "
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.946065 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-logs" (OuterVolumeSpecName: "logs") pod "6f47f095-abde-4e07-8edf-d0a318043581" (UID: "6f47f095-abde-4e07-8edf-d0a318043581"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.946109 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-nova-metadata-tls-certs\") pod \"e7003b80-53fa-4550-8f18-486a0f7988c9\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") "
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.946578 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4d9b08-1038-4f16-9217-509166cc2e7b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.946595 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9cgp\" (UniqueName: \"kubernetes.io/projected/3f4d9b08-1038-4f16-9217-509166cc2e7b-kube-api-access-d9cgp\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.946609 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-config-data\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.946622 4867 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b43ddff2-67cd-4ab7-84c1-763dd002457c-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.946633 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-logs\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.950269 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43ddff2-67cd-4ab7-84c1-763dd002457c-kube-api-access-lk76d" (OuterVolumeSpecName: "kube-api-access-lk76d") pod "b43ddff2-67cd-4ab7-84c1-763dd002457c" (UID: "b43ddff2-67cd-4ab7-84c1-763dd002457c"). InnerVolumeSpecName "kube-api-access-lk76d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.951187 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f47f095-abde-4e07-8edf-d0a318043581-kube-api-access-x97bq" (OuterVolumeSpecName: "kube-api-access-x97bq") pod "6f47f095-abde-4e07-8edf-d0a318043581" (UID: "6f47f095-abde-4e07-8edf-d0a318043581"). InnerVolumeSpecName "kube-api-access-x97bq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.954305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7003b80-53fa-4550-8f18-486a0f7988c9-kube-api-access-5qk5x" (OuterVolumeSpecName: "kube-api-access-5qk5x") pod "e7003b80-53fa-4550-8f18-486a0f7988c9" (UID: "e7003b80-53fa-4550-8f18-486a0f7988c9"). InnerVolumeSpecName "kube-api-access-5qk5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.954733 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7003b80-53fa-4550-8f18-486a0f7988c9-logs" (OuterVolumeSpecName: "logs") pod "e7003b80-53fa-4550-8f18-486a0f7988c9" (UID: "e7003b80-53fa-4550-8f18-486a0f7988c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.959345 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-kube-api-access-7t7ll" (OuterVolumeSpecName: "kube-api-access-7t7ll") pod "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" (UID: "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20"). InnerVolumeSpecName "kube-api-access-7t7ll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.962174 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7965d77d77-cwbt7"
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.963679 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.974271 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-config-data" (OuterVolumeSpecName: "config-data") pod "e7003b80-53fa-4550-8f18-486a0f7988c9" (UID: "e7003b80-53fa-4550-8f18-486a0f7988c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.988934 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f47f095-abde-4e07-8edf-d0a318043581" (UID: "6f47f095-abde-4e07-8edf-d0a318043581"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.994311 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-scripts" (OuterVolumeSpecName: "scripts") pod "6f47f095-abde-4e07-8edf-d0a318043581" (UID: "6f47f095-abde-4e07-8edf-d0a318043581"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:31 crc kubenswrapper[4867]: I0101 08:50:31.996757 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr"
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.017905 4867 scope.go:117] "RemoveContainer" containerID="faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489"
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.037365 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e7003b80-53fa-4550-8f18-486a0f7988c9" (UID: "e7003b80-53fa-4550-8f18-486a0f7988c9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.049714 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7003b80-53fa-4550-8f18-486a0f7988c9" (UID: "e7003b80-53fa-4550-8f18-486a0f7988c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050260 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-httpd-run\") pod \"6f47f095-abde-4e07-8edf-d0a318043581\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050302 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-internal-tls-certs\") pod \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050333 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-combined-ca-bundle\") pod \"e7003b80-53fa-4550-8f18-486a0f7988c9\" (UID: \"e7003b80-53fa-4550-8f18-486a0f7988c9\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050369 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data-custom\") pod \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050424 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"6f47f095-abde-4e07-8edf-d0a318043581\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050444 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-public-tls-certs\") pod \"6f47f095-abde-4e07-8edf-d0a318043581\" (UID: \"6f47f095-abde-4e07-8edf-d0a318043581\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050486 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-logs\") pod \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050518 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-public-tls-certs\") pod \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\" (UID: \"3c2a7f74-c5ce-45fb-a1fa-c19c025aea20\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050747 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-galera-tls-certs\") pod \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050780 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-operator-scripts\") pod \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050806 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e96caa-b906-4b24-af21-8068ea727bba-logs\") pod \"c6e96caa-b906-4b24-af21-8068ea727bba\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050844 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data-custom\") pod \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-combined-ca-bundle\") pod \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050930 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-combined-ca-bundle\") pod \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.050952 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051033 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd62t\" (UniqueName: \"kubernetes.io/projected/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-kube-api-access-dd62t\") pod \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051056 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg6sm\" (UniqueName: \"kubernetes.io/projected/c6e96caa-b906-4b24-af21-8068ea727bba-kube-api-access-fg6sm\") pod \"c6e96caa-b906-4b24-af21-8068ea727bba\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051083 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x94zz\" (UniqueName: \"kubernetes.io/projected/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kube-api-access-x94zz\") pod \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051114 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data-custom\") pod \"c6e96caa-b906-4b24-af21-8068ea727bba\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051175 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6f47f095-abde-4e07-8edf-d0a318043581" (UID: "6f47f095-abde-4e07-8edf-d0a318043581"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051488 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk76d\" (UniqueName: \"kubernetes.io/projected/b43ddff2-67cd-4ab7-84c1-763dd002457c-kube-api-access-lk76d\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051502 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-config-data\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051512 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x97bq\" (UniqueName: \"kubernetes.io/projected/6f47f095-abde-4e07-8edf-d0a318043581-kube-api-access-x97bq\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051520 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qk5x\" (UniqueName: \"kubernetes.io/projected/e7003b80-53fa-4550-8f18-486a0f7988c9-kube-api-access-5qk5x\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051529 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-scripts\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051538 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051546 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7003b80-53fa-4550-8f18-486a0f7988c9-logs\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051555 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t7ll\" (UniqueName: \"kubernetes.io/projected/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-kube-api-access-7t7ll\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051566 4867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.051574 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f47f095-abde-4e07-8edf-d0a318043581-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.052338 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-config-data" (OuterVolumeSpecName: "config-data") pod "6f47f095-abde-4e07-8edf-d0a318043581" (UID: "6f47f095-abde-4e07-8edf-d0a318043581"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: W0101 08:50:32.052399 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e7003b80-53fa-4550-8f18-486a0f7988c9/volumes/kubernetes.io~secret/combined-ca-bundle
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.052409 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7003b80-53fa-4550-8f18-486a0f7988c9" (UID: "e7003b80-53fa-4550-8f18-486a0f7988c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.058122 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" (UID: "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.061267 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "22fe2632-f8f6-4ef9-9f4c-72b69bd45932" (UID: "22fe2632-f8f6-4ef9-9f4c-72b69bd45932"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.064755 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-logs" (OuterVolumeSpecName: "logs") pod "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" (UID: "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.065120 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e96caa-b906-4b24-af21-8068ea727bba-logs" (OuterVolumeSpecName: "logs") pod "c6e96caa-b906-4b24-af21-8068ea727bba" (UID: "c6e96caa-b906-4b24-af21-8068ea727bba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.066173 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kube-api-access-x94zz" (OuterVolumeSpecName: "kube-api-access-x94zz") pod "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" (UID: "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb"). InnerVolumeSpecName "kube-api-access-x94zz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.066635 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" (UID: "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.067448 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c6e96caa-b906-4b24-af21-8068ea727bba" (UID: "c6e96caa-b906-4b24-af21-8068ea727bba"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.072153 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e96caa-b906-4b24-af21-8068ea727bba-kube-api-access-fg6sm" (OuterVolumeSpecName: "kube-api-access-fg6sm") pod "c6e96caa-b906-4b24-af21-8068ea727bba" (UID: "c6e96caa-b906-4b24-af21-8068ea727bba"). InnerVolumeSpecName "kube-api-access-fg6sm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.072227 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "6f47f095-abde-4e07-8edf-d0a318043581" (UID: "6f47f095-abde-4e07-8edf-d0a318043581"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.072327 4867 scope.go:117] "RemoveContainer" containerID="c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb"
Jan 01 08:50:32 crc kubenswrapper[4867]: E0101 08:50:32.072693 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb\": container with ID starting with c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb not found: ID does not exist" containerID="c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb"
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.072733 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb"} err="failed to get container status \"c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb\": rpc error: code = NotFound desc = could not find container \"c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb\": container with ID starting with c1322607e2d2d81092b3e995c7264c64ede61c6ce739cb323ee27a1ce97fbebb not found: ID does not exist"
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.072752 4867 scope.go:117] "RemoveContainer" containerID="faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489"
Jan 01 08:50:32 crc kubenswrapper[4867]: E0101 08:50:32.073040 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489\": container with ID starting with faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489 not found: ID does not exist" containerID="faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489"
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.073064 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489"} err="failed to get container status \"faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489\": rpc error: code = NotFound desc = could not find container \"faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489\": container with ID starting with faeef81012a39d5e86ee47c82b3d29f10718732a72e3a4c2371bd4f1d2e7f489 not found: ID does not exist"
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.073079 4867 scope.go:117] "RemoveContainer" containerID="e80411603dc0ac8d446f1e707d73b2bad909e42859006cf6a585616040d3b259"
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.073290 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" (UID: "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.076755 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-kube-api-access-dd62t" (OuterVolumeSpecName: "kube-api-access-dd62t") pod "22fe2632-f8f6-4ef9-9f4c-72b69bd45932" (UID: "22fe2632-f8f6-4ef9-9f4c-72b69bd45932"). InnerVolumeSpecName "kube-api-access-dd62t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.096042 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "mysql-db") pod "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" (UID: "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: E0101 08:50:32.131300 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 01 08:50:32 crc kubenswrapper[4867]: E0101 08:50:32.132622 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.135580 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "b43ddff2-67cd-4ab7-84c1-763dd002457c" (UID: "b43ddff2-67cd-4ab7-84c1-763dd002457c"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: E0101 08:50:32.136446 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 01 08:50:32 crc kubenswrapper[4867]: E0101 08:50:32.136500 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server"
Jan 01 08:50:32 crc kubenswrapper[4867]: E0101 08:50:32.147257 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.147426 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b43ddff2-67cd-4ab7-84c1-763dd002457c" (UID: "b43ddff2-67cd-4ab7-84c1-763dd002457c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: E0101 08:50:32.150643 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.152402 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-combined-ca-bundle\") pod \"c6e96caa-b906-4b24-af21-8068ea727bba\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.152441 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-logs\") pod \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.152489 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data\") pod \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\" (UID: \"22fe2632-f8f6-4ef9-9f4c-72b69bd45932\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.152515 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data\") pod \"c6e96caa-b906-4b24-af21-8068ea727bba\" (UID: \"c6e96caa-b906-4b24-af21-8068ea727bba\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.152539 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-default\") pod \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.152562 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kolla-config\") pod \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.152586 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-generated\") pod \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\" (UID: \"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb\") "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153061 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153079 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e96caa-b906-4b24-af21-8068ea727bba-logs\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153091 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153102 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153123 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153135 4867 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43ddff2-67cd-4ab7-84c1-763dd002457c-memcached-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153145 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-config-data\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153156 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd62t\" (UniqueName: \"kubernetes.io/projected/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-kube-api-access-dd62t\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153169 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg6sm\" (UniqueName: \"kubernetes.io/projected/c6e96caa-b906-4b24-af21-8068ea727bba-kube-api-access-fg6sm\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153179 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x94zz\" (UniqueName: \"kubernetes.io/projected/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kube-api-access-x94zz\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153189 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153201 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153212 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7003b80-53fa-4550-8f18-486a0f7988c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153222 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153238 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153249 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-logs\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.153550 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" (UID: "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.154436 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-logs" (OuterVolumeSpecName: "logs") pod "22fe2632-f8f6-4ef9-9f4c-72b69bd45932" (UID: "22fe2632-f8f6-4ef9-9f4c-72b69bd45932"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.154703 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" (UID: "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.156098 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" (UID: "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb"). InnerVolumeSpecName "config-data-default".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.159628 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.204:3000/\": dial tcp 10.217.0.204:3000: connect: connection refused" Jan 01 08:50:32 crc kubenswrapper[4867]: E0101 08:50:32.176134 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:32 crc kubenswrapper[4867]: E0101 08:50:32.176351 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovs-vswitchd" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.181819 4867 scope.go:117] "RemoveContainer" containerID="455b0cde75a033b7a0c94fdc6b6b1dd1216e9777beb9c14b66a6998f6b2fa1d5" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.195440 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.199234 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" (UID: "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.245639 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7n8cs" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.252007 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22fe2632-f8f6-4ef9-9f4c-72b69bd45932" (UID: "22fe2632-f8f6-4ef9-9f4c-72b69bd45932"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.253353 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" (UID: "3bd7d188-bdc2-4aa8-891b-0775de1a5eeb"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254230 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19551dba-c741-42e0-b228-6cad78717264-operator-scripts\") pod \"19551dba-c741-42e0-b228-6cad78717264\" (UID: \"19551dba-c741-42e0-b228-6cad78717264\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254289 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-combined-ca-bundle\") pod \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254312 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcx8x\" (UniqueName: \"kubernetes.io/projected/19551dba-c741-42e0-b228-6cad78717264-kube-api-access-vcx8x\") pod \"19551dba-c741-42e0-b228-6cad78717264\" (UID: \"19551dba-c741-42e0-b228-6cad78717264\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254385 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpgt6\" (UniqueName: \"kubernetes.io/projected/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-kube-api-access-gpgt6\") pod \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254405 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-config-data\") pod \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\" (UID: \"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254629 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254642 4867 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254650 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254659 4867 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254668 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254678 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.254687 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.256862 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19551dba-c741-42e0-b228-6cad78717264-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "19551dba-c741-42e0-b228-6cad78717264" (UID: "19551dba-c741-42e0-b228-6cad78717264"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.265538 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.270488 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19551dba-c741-42e0-b228-6cad78717264-kube-api-access-vcx8x" (OuterVolumeSpecName: "kube-api-access-vcx8x") pod "19551dba-c741-42e0-b228-6cad78717264" (UID: "19551dba-c741-42e0-b228-6cad78717264"). InnerVolumeSpecName "kube-api-access-vcx8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.272519 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-kube-api-access-gpgt6" (OuterVolumeSpecName: "kube-api-access-gpgt6") pod "1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9" (UID: "1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9"). InnerVolumeSpecName "kube-api-access-gpgt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.274226 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6e96caa-b906-4b24-af21-8068ea727bba" (UID: "c6e96caa-b906-4b24-af21-8068ea727bba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.282027 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.286411 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data" (OuterVolumeSpecName: "config-data") pod "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" (UID: "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.290744 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data" (OuterVolumeSpecName: "config-data") pod "c6e96caa-b906-4b24-af21-8068ea727bba" (UID: "c6e96caa-b906-4b24-af21-8068ea727bba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.299909 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-config-data" (OuterVolumeSpecName: "config-data") pod "1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9" (UID: "1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.302115 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" (UID: "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.317635 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6f47f095-abde-4e07-8edf-d0a318043581" (UID: "6f47f095-abde-4e07-8edf-d0a318043581"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.318019 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" (UID: "3c2a7f74-c5ce-45fb-a1fa-c19c025aea20"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.321621 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data" (OuterVolumeSpecName: "config-data") pod "22fe2632-f8f6-4ef9-9f4c-72b69bd45932" (UID: "22fe2632-f8f6-4ef9-9f4c-72b69bd45932"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.325966 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9" (UID: "1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356543 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fe2632-f8f6-4ef9-9f4c-72b69bd45932-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356571 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356581 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356589 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f47f095-abde-4e07-8edf-d0a318043581-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356598 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356607 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpgt6\" (UniqueName: \"kubernetes.io/projected/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-kube-api-access-gpgt6\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356615 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356623 4867 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356631 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356638 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19551dba-c741-42e0-b228-6cad78717264-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356646 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e96caa-b906-4b24-af21-8068ea727bba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356653 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356663 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.356671 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcx8x\" (UniqueName: \"kubernetes.io/projected/19551dba-c741-42e0-b228-6cad78717264-kube-api-access-vcx8x\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.455778 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9943de7c-1d29-416f-ba57-ea51bf9e56f3/ovn-northd/0.log" Jan 01 08:50:32 crc 
kubenswrapper[4867]: I0101 08:50:32.455944 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.458888 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-combined-ca-bundle\") pod \"cda1d2c0-2470-41f9-9969-776f8883a38b\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559162 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-combined-ca-bundle\") pod \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559215 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-internal-tls-certs\") pod \"cda1d2c0-2470-41f9-9969-776f8883a38b\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559297 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-northd-tls-certs\") pod \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559328 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8rpj\" (UniqueName: 
\"kubernetes.io/projected/9943de7c-1d29-416f-ba57-ea51bf9e56f3-kube-api-access-g8rpj\") pod \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559372 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-metrics-certs-tls-certs\") pod \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559402 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda1d2c0-2470-41f9-9969-776f8883a38b-logs\") pod \"cda1d2c0-2470-41f9-9969-776f8883a38b\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559426 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-public-tls-certs\") pod \"cda1d2c0-2470-41f9-9969-776f8883a38b\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559463 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-rundir\") pod \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559493 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-config-data\") pod \"cda1d2c0-2470-41f9-9969-776f8883a38b\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559523 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-scripts\") pod \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559560 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-config\") pod \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\" (UID: \"9943de7c-1d29-416f-ba57-ea51bf9e56f3\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.559617 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zznmd\" (UniqueName: \"kubernetes.io/projected/cda1d2c0-2470-41f9-9969-776f8883a38b-kube-api-access-zznmd\") pod \"cda1d2c0-2470-41f9-9969-776f8883a38b\" (UID: \"cda1d2c0-2470-41f9-9969-776f8883a38b\") " Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.560358 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-scripts" (OuterVolumeSpecName: "scripts") pod "9943de7c-1d29-416f-ba57-ea51bf9e56f3" (UID: "9943de7c-1d29-416f-ba57-ea51bf9e56f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.560381 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-config" (OuterVolumeSpecName: "config") pod "9943de7c-1d29-416f-ba57-ea51bf9e56f3" (UID: "9943de7c-1d29-416f-ba57-ea51bf9e56f3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.560495 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "9943de7c-1d29-416f-ba57-ea51bf9e56f3" (UID: "9943de7c-1d29-416f-ba57-ea51bf9e56f3"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.560812 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cda1d2c0-2470-41f9-9969-776f8883a38b-logs" (OuterVolumeSpecName: "logs") pod "cda1d2c0-2470-41f9-9969-776f8883a38b" (UID: "cda1d2c0-2470-41f9-9969-776f8883a38b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.565701 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda1d2c0-2470-41f9-9969-776f8883a38b-kube-api-access-zznmd" (OuterVolumeSpecName: "kube-api-access-zznmd") pod "cda1d2c0-2470-41f9-9969-776f8883a38b" (UID: "cda1d2c0-2470-41f9-9969-776f8883a38b"). InnerVolumeSpecName "kube-api-access-zznmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.567822 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9943de7c-1d29-416f-ba57-ea51bf9e56f3-kube-api-access-g8rpj" (OuterVolumeSpecName: "kube-api-access-g8rpj") pod "9943de7c-1d29-416f-ba57-ea51bf9e56f3" (UID: "9943de7c-1d29-416f-ba57-ea51bf9e56f3"). InnerVolumeSpecName "kube-api-access-g8rpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.587021 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9943de7c-1d29-416f-ba57-ea51bf9e56f3" (UID: "9943de7c-1d29-416f-ba57-ea51bf9e56f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.601731 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-config-data" (OuterVolumeSpecName: "config-data") pod "cda1d2c0-2470-41f9-9969-776f8883a38b" (UID: "cda1d2c0-2470-41f9-9969-776f8883a38b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.605085 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cda1d2c0-2470-41f9-9969-776f8883a38b" (UID: "cda1d2c0-2470-41f9-9969-776f8883a38b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.638923 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cda1d2c0-2470-41f9-9969-776f8883a38b" (UID: "cda1d2c0-2470-41f9-9969-776f8883a38b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.640081 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cda1d2c0-2470-41f9-9969-776f8883a38b" (UID: "cda1d2c0-2470-41f9-9969-776f8883a38b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.664952 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.664988 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zznmd\" (UniqueName: \"kubernetes.io/projected/cda1d2c0-2470-41f9-9969-776f8883a38b-kube-api-access-zznmd\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.665002 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.665015 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.665026 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.665038 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8rpj\" 
(UniqueName: \"kubernetes.io/projected/9943de7c-1d29-416f-ba57-ea51bf9e56f3-kube-api-access-g8rpj\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.665048 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda1d2c0-2470-41f9-9969-776f8883a38b-logs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.665057 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.665066 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.665076 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda1d2c0-2470-41f9-9969-776f8883a38b-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.665086 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9943de7c-1d29-416f-ba57-ea51bf9e56f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.679940 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "9943de7c-1d29-416f-ba57-ea51bf9e56f3" (UID: "9943de7c-1d29-416f-ba57-ea51bf9e56f3"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.681122 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "9943de7c-1d29-416f-ba57-ea51bf9e56f3" (UID: "9943de7c-1d29-416f-ba57-ea51bf9e56f3"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.756667 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3bd7d188-bdc2-4aa8-891b-0775de1a5eeb","Type":"ContainerDied","Data":"2a41ae11feae684b800d38b921b0c7c3b2a86837e000d38cf362797dc6c9f5c4"} Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.756716 4867 scope.go:117] "RemoveContainer" containerID="b839a4dffb22a75f3657a1d1eebb4e7c86aa3448b01b75268a7fa008e4d35304" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.756864 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.759368 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8jl6r" podUID="02bf5c7d-1674-4308-8bcf-751d6c4a3783" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.766544 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.766572 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9943de7c-1d29-416f-ba57-ea51bf9e56f3-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.767757 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" event={"ID":"c6e96caa-b906-4b24-af21-8068ea727bba","Type":"ContainerDied","Data":"7168733cccc3027f6c89418c54278683d8482779dc899896d27c0684ce67c9d3"} Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.767802 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6596d5f4d6-9cxqr" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.773676 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7n8cs" event={"ID":"19551dba-c741-42e0-b228-6cad78717264","Type":"ContainerDied","Data":"76dd789da56b50cb9d816998bd6433f9f32b14d954aa19464cfe9c312057f888"} Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.773739 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7n8cs" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.782007 4867 generic.go:334] "Generic (PLEG): container finished" podID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerID="c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17" exitCode=0 Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.782049 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.782194 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cda1d2c0-2470-41f9-9969-776f8883a38b","Type":"ContainerDied","Data":"c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17"} Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.782249 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cda1d2c0-2470-41f9-9969-776f8883a38b","Type":"ContainerDied","Data":"c78780e541a9b5d44089e1b10d2b1c3d526e9e3b73adb115b1bfc1c415ead6a0"} Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.788537 4867 scope.go:117] "RemoveContainer" containerID="ddbbd1f9c02c5fe9c1620e55640fa0cc298224e65712bb5df0b7c0ca0dbbf444" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.790858 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9","Type":"ContainerDied","Data":"0a8a2c1f4672ffb058bbef5bc7420b021bb60cf0bcf689b95da09dc5f0b6f793"} Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.790979 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.809821 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.810975 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8jl6r" podUID="02bf5c7d-1674-4308-8bcf-751d6c4a3783" containerName="ovn-controller" probeResult="failure" output=< Jan 01 08:50:32 crc kubenswrapper[4867]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 01 08:50:32 crc kubenswrapper[4867]: > Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.840369 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.841826 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7965d77d77-cwbt7" event={"ID":"22fe2632-f8f6-4ef9-9f4c-72b69bd45932","Type":"ContainerDied","Data":"7e87f25f46bcfa7bfcd57512086c4de728bbc31e4afa1b98cc7982e64bec37f2"} Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.841964 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7965d77d77-cwbt7" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.844382 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9943de7c-1d29-416f-ba57-ea51bf9e56f3/ovn-northd/0.log" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.844428 4867 generic.go:334] "Generic (PLEG): container finished" podID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerID="ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2" exitCode=139 Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.844519 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58dc5bfddd-522rc" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.844573 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9943de7c-1d29-416f-ba57-ea51bf9e56f3","Type":"ContainerDied","Data":"ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2"} Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.844601 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9943de7c-1d29-416f-ba57-ea51bf9e56f3","Type":"ContainerDied","Data":"2afb3fba923f62bb17eecb5f89c16c0e6495c8ed2c3f37ce01d15ea389a0d4e4"} Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.844653 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.844697 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.844746 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.845000 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.854297 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6596d5f4d6-9cxqr"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.872340 4867 scope.go:117] "RemoveContainer" containerID="12ac59ef1025a56a54145198bfc20879e2c8969f62ef2c28de3bb86b0129fd27" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.881340 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-6596d5f4d6-9cxqr"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.898618 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7n8cs"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.904409 4867 scope.go:117] "RemoveContainer" containerID="dc673bc1feba5e02af532b24171ae7075ed044000fad91c5933e93e216ca2214" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.905582 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7n8cs"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.912049 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.923919 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.926625 4867 scope.go:117] "RemoveContainer" containerID="c17bcf9b54864c61af9437862e149d2b101e45a4f71a347afd0f0194d5754edb" Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.939225 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.944460 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.948954 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ovn-northd-0"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.953416 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.957643 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 01 08:50:32 crc kubenswrapper[4867]: I0101 08:50:32.961855 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.004973 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58dc5bfddd-522rc"] Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.012187 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-58dc5bfddd-522rc"] Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.021336 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.028574 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.032246 4867 scope.go:117] "RemoveContainer" containerID="c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.033344 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7965d77d77-cwbt7"] Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.039615 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7965d77d77-cwbt7"] Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.044308 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.048568 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 
08:50:33.057414 4867 scope.go:117] "RemoveContainer" containerID="b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179" Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.071233 4867 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.071325 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data podName:1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99 nodeName:}" failed. No retries permitted until 2026-01-01 08:50:41.071301007 +0000 UTC m=+1450.206569786 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data") pod "rabbitmq-cell1-server-0" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99") : configmap "rabbitmq-cell1-config-data" not found Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.080779 4867 scope.go:117] "RemoveContainer" containerID="c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17" Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.081362 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17\": container with ID starting with c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17 not found: ID does not exist" containerID="c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.081421 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17"} err="failed to get container status \"c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17\": rpc error: code = NotFound desc = could 
not find container \"c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17\": container with ID starting with c013d238e22e79dc9d0e40fa979e690ede634035cc8946de67a72eabb0c5ea17 not found: ID does not exist" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.081463 4867 scope.go:117] "RemoveContainer" containerID="b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179" Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.082098 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179\": container with ID starting with b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179 not found: ID does not exist" containerID="b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.082147 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179"} err="failed to get container status \"b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179\": rpc error: code = NotFound desc = could not find container \"b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179\": container with ID starting with b66638c98090f5e1aaf6296dd6eb2d5dfcfb3fdb6de51af32ae9f4151cd17179 not found: ID does not exist" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.082174 4867 scope.go:117] "RemoveContainer" containerID="3d733e18f1ee0ab5fdfc275f4b701971bfd4e30736094221d5f2e06640b3bfa5" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.099248 4867 scope.go:117] "RemoveContainer" containerID="f95ad7dcbf76b229ef0f72ae0e667de7d0e25a5f3d7e84f84fd18139ab18e305" Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.105796 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.107372 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.109283 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.109337 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb" containerName="nova-cell0-conductor-conductor" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.122272 4867 scope.go:117] "RemoveContainer" containerID="65ef15ad242719f3da63fa724d97de1fb1223fd81f2c48a72e0cb2f1c91f8f4b" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.144173 4867 scope.go:117] "RemoveContainer" containerID="92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.171132 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1822baf8-11aa-4152-a74f-2ce0383c1094" path="/var/lib/kubelet/pods/1822baf8-11aa-4152-a74f-2ce0383c1094/volumes" Jan 
01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.172064 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19551dba-c741-42e0-b228-6cad78717264" path="/var/lib/kubelet/pods/19551dba-c741-42e0-b228-6cad78717264/volumes" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.172697 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9" path="/var/lib/kubelet/pods/1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9/volumes" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.173915 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22fe2632-f8f6-4ef9-9f4c-72b69bd45932" path="/var/lib/kubelet/pods/22fe2632-f8f6-4ef9-9f4c-72b69bd45932/volumes" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.174665 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" path="/var/lib/kubelet/pods/3bd7d188-bdc2-4aa8-891b-0775de1a5eeb/volumes" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.175532 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" path="/var/lib/kubelet/pods/3c2a7f74-c5ce-45fb-a1fa-c19c025aea20/volumes" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.176883 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4d9b08-1038-4f16-9217-509166cc2e7b" path="/var/lib/kubelet/pods/3f4d9b08-1038-4f16-9217-509166cc2e7b/volumes" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.177241 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f47f095-abde-4e07-8edf-d0a318043581" path="/var/lib/kubelet/pods/6f47f095-abde-4e07-8edf-d0a318043581/volumes" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.177836 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" path="/var/lib/kubelet/pods/9943de7c-1d29-416f-ba57-ea51bf9e56f3/volumes" Jan 
01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.178823 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43ddff2-67cd-4ab7-84c1-763dd002457c" path="/var/lib/kubelet/pods/b43ddff2-67cd-4ab7-84c1-763dd002457c/volumes" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.179360 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e96caa-b906-4b24-af21-8068ea727bba" path="/var/lib/kubelet/pods/c6e96caa-b906-4b24-af21-8068ea727bba/volumes" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.179912 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda1d2c0-2470-41f9-9969-776f8883a38b" path="/var/lib/kubelet/pods/cda1d2c0-2470-41f9-9969-776f8883a38b/volumes" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.180498 4867 scope.go:117] "RemoveContainer" containerID="ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.181692 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" path="/var/lib/kubelet/pods/e7003b80-53fa-4550-8f18-486a0f7988c9/volumes" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.208569 4867 scope.go:117] "RemoveContainer" containerID="92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c" Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.209039 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c\": container with ID starting with 92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c not found: ID does not exist" containerID="92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.209072 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c"} err="failed to get container status \"92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c\": rpc error: code = NotFound desc = could not find container \"92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c\": container with ID starting with 92bfa5f8823984895188abd8d532b965b9e2ae8de93cf5f7e5a288490fe32e3c not found: ID does not exist" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.209092 4867 scope.go:117] "RemoveContainer" containerID="ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2" Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.209382 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2\": container with ID starting with ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2 not found: ID does not exist" containerID="ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.209403 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2"} err="failed to get container status \"ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2\": rpc error: code = NotFound desc = could not find container \"ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2\": container with ID starting with ec53f251aded63efc11dcea8ffde6a118aeb1632f72313429e33668486c985a2 not found: ID does not exist" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.338413 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.478337 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-fernet-keys\") pod \"985cc3ff-ea2f-4386-a828-180deef97412\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.478664 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-scripts\") pod \"985cc3ff-ea2f-4386-a828-180deef97412\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.478748 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-config-data\") pod \"985cc3ff-ea2f-4386-a828-180deef97412\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.478800 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-combined-ca-bundle\") pod \"985cc3ff-ea2f-4386-a828-180deef97412\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.478868 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-internal-tls-certs\") pod \"985cc3ff-ea2f-4386-a828-180deef97412\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.479718 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp592\" (UniqueName: 
\"kubernetes.io/projected/985cc3ff-ea2f-4386-a828-180deef97412-kube-api-access-fp592\") pod \"985cc3ff-ea2f-4386-a828-180deef97412\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.479792 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-credential-keys\") pod \"985cc3ff-ea2f-4386-a828-180deef97412\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.479857 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-public-tls-certs\") pod \"985cc3ff-ea2f-4386-a828-180deef97412\" (UID: \"985cc3ff-ea2f-4386-a828-180deef97412\") " Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.484665 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-scripts" (OuterVolumeSpecName: "scripts") pod "985cc3ff-ea2f-4386-a828-180deef97412" (UID: "985cc3ff-ea2f-4386-a828-180deef97412"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.485496 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "985cc3ff-ea2f-4386-a828-180deef97412" (UID: "985cc3ff-ea2f-4386-a828-180deef97412"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.489483 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985cc3ff-ea2f-4386-a828-180deef97412-kube-api-access-fp592" (OuterVolumeSpecName: "kube-api-access-fp592") pod "985cc3ff-ea2f-4386-a828-180deef97412" (UID: "985cc3ff-ea2f-4386-a828-180deef97412"). InnerVolumeSpecName "kube-api-access-fp592". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.512205 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "985cc3ff-ea2f-4386-a828-180deef97412" (UID: "985cc3ff-ea2f-4386-a828-180deef97412"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.514052 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-config-data" (OuterVolumeSpecName: "config-data") pod "985cc3ff-ea2f-4386-a828-180deef97412" (UID: "985cc3ff-ea2f-4386-a828-180deef97412"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.515159 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "985cc3ff-ea2f-4386-a828-180deef97412" (UID: "985cc3ff-ea2f-4386-a828-180deef97412"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.536228 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "985cc3ff-ea2f-4386-a828-180deef97412" (UID: "985cc3ff-ea2f-4386-a828-180deef97412"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.536952 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "985cc3ff-ea2f-4386-a828-180deef97412" (UID: "985cc3ff-ea2f-4386-a828-180deef97412"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.583359 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.583388 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.583397 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.583407 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-internal-tls-certs\") on node \"crc\" DevicePath \"\"" 
Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.583415 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp592\" (UniqueName: \"kubernetes.io/projected/985cc3ff-ea2f-4386-a828-180deef97412-kube-api-access-fp592\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.583425 4867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.583433 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.583457 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/985cc3ff-ea2f-4386-a828-180deef97412-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.673071 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" podUID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.171:8080/healthcheck\": dial tcp 10.217.0.171:8080: i/o timeout" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.673383 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-86c7f77bc7-nt6jq" podUID="28bc6ac4-481b-4809-a61e-f32ff6a17920" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.171:8080/healthcheck\": dial tcp 10.217.0.171:8080: i/o timeout" Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.687889 4867 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 01 08:50:33 crc 
kubenswrapper[4867]: E0101 08:50:33.688173 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data podName:84d7aac6-1073-41c0-acff-169e36ec197d nodeName:}" failed. No retries permitted until 2026-01-01 08:50:41.688154892 +0000 UTC m=+1450.823423661 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data") pod "rabbitmq-server-0" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d") : configmap "rabbitmq-config-data" not found Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.877399 4867 generic.go:334] "Generic (PLEG): container finished" podID="985cc3ff-ea2f-4386-a828-180deef97412" containerID="8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4" exitCode=0 Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.877469 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6498f7d58c-nhfz8" event={"ID":"985cc3ff-ea2f-4386-a828-180deef97412","Type":"ContainerDied","Data":"8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4"} Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.877470 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6498f7d58c-nhfz8" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.877492 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6498f7d58c-nhfz8" event={"ID":"985cc3ff-ea2f-4386-a828-180deef97412","Type":"ContainerDied","Data":"78d166e6881a233791f4c96550ca9196d6e3169a5a30f0435a44c02b656e7909"} Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.877508 4867 scope.go:117] "RemoveContainer" containerID="8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.926847 4867 scope.go:117] "RemoveContainer" containerID="8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4" Jan 01 08:50:33 crc kubenswrapper[4867]: E0101 08:50:33.927430 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4\": container with ID starting with 8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4 not found: ID does not exist" containerID="8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.927468 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4"} err="failed to get container status \"8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4\": rpc error: code = NotFound desc = could not find container \"8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4\": container with ID starting with 8e0fec353ecc8bde0124bae2920fcbd9124025492a08e22cab9fb8e38095f3a4 not found: ID does not exist" Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 08:50:33.928344 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6498f7d58c-nhfz8"] Jan 01 08:50:33 crc kubenswrapper[4867]: I0101 
08:50:33.939142 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6498f7d58c-nhfz8"] Jan 01 08:50:34 crc kubenswrapper[4867]: E0101 08:50:34.294267 4867 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 01 08:50:34 crc kubenswrapper[4867]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-01T08:50:28Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 01 08:50:34 crc kubenswrapper[4867]: /etc/init.d/functions: line 589: 393 Alarm clock "$@" Jan 01 08:50:34 crc kubenswrapper[4867]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-8jl6r" message=< Jan 01 08:50:34 crc kubenswrapper[4867]: Exiting ovn-controller (1) [FAILED] Jan 01 08:50:34 crc kubenswrapper[4867]: Killing ovn-controller (1) [ OK ] Jan 01 08:50:34 crc kubenswrapper[4867]: 2026-01-01T08:50:28Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 01 08:50:34 crc kubenswrapper[4867]: /etc/init.d/functions: line 589: 393 Alarm clock "$@" Jan 01 08:50:34 crc kubenswrapper[4867]: > Jan 01 08:50:34 crc kubenswrapper[4867]: E0101 08:50:34.294575 4867 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 01 08:50:34 crc kubenswrapper[4867]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-01T08:50:28Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 01 08:50:34 crc kubenswrapper[4867]: /etc/init.d/functions: line 589: 393 Alarm clock "$@" Jan 01 08:50:34 crc kubenswrapper[4867]: > pod="openstack/ovn-controller-8jl6r" podUID="02bf5c7d-1674-4308-8bcf-751d6c4a3783" containerName="ovn-controller" containerID="cri-o://c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.294624 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-8jl6r" 
podUID="02bf5c7d-1674-4308-8bcf-751d6c4a3783" containerName="ovn-controller" containerID="cri-o://c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9" gracePeriod=23 Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.424016 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.564122 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.611721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgxj9\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-kube-api-access-fgxj9\") pod \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.611774 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.611808 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-server-conf\") pod \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.611836 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-erlang-cookie-secret\") pod \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 
08:50:34.611901 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-plugins\") pod \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.611927 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-plugins-conf\") pod \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.611971 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-tls\") pod \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.611992 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-confd\") pod \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.612021 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-pod-info\") pod \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.612054 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data\") pod 
\"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.612073 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-erlang-cookie\") pod \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\" (UID: \"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.612420 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.612597 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.613422 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.616472 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.617296 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-pod-info" (OuterVolumeSpecName: "pod-info") pod "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.617306 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-kube-api-access-fgxj9" (OuterVolumeSpecName: "kube-api-access-fgxj9") pod "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99"). InnerVolumeSpecName "kube-api-access-fgxj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.617595 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.618022 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.640845 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data" (OuterVolumeSpecName: "config-data") pod "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.659594 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8jl6r_02bf5c7d-1674-4308-8bcf-751d6c4a3783/ovn-controller/0.log" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.659666 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jl6r" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.668745 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-server-conf" (OuterVolumeSpecName: "server-conf") pod "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.698666 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" (UID: "1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.713210 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-server-conf\") pod \"84d7aac6-1073-41c0-acff-169e36ec197d\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.713278 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbzfw\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-kube-api-access-gbzfw\") pod \"84d7aac6-1073-41c0-acff-169e36ec197d\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.713312 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-plugins-conf\") pod \"84d7aac6-1073-41c0-acff-169e36ec197d\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.713351 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-tls\") pod \"84d7aac6-1073-41c0-acff-169e36ec197d\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.713444 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-erlang-cookie\") pod \"84d7aac6-1073-41c0-acff-169e36ec197d\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.713509 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/84d7aac6-1073-41c0-acff-169e36ec197d-pod-info\") pod \"84d7aac6-1073-41c0-acff-169e36ec197d\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.713591 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-plugins\") pod \"84d7aac6-1073-41c0-acff-169e36ec197d\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.713766 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-confd\") pod \"84d7aac6-1073-41c0-acff-169e36ec197d\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.713796 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"84d7aac6-1073-41c0-acff-169e36ec197d\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.713849 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/84d7aac6-1073-41c0-acff-169e36ec197d-erlang-cookie-secret\") pod \"84d7aac6-1073-41c0-acff-169e36ec197d\" (UID: 
\"84d7aac6-1073-41c0-acff-169e36ec197d\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.713933 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data\") pod \"84d7aac6-1073-41c0-acff-169e36ec197d\" (UID: \"84d7aac6-1073-41c0-acff-169e36ec197d\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714002 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run-ovn\") pod \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714102 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "84d7aac6-1073-41c0-acff-169e36ec197d" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714304 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "84d7aac6-1073-41c0-acff-169e36ec197d" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714364 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "02bf5c7d-1674-4308-8bcf-751d6c4a3783" (UID: "02bf5c7d-1674-4308-8bcf-751d6c4a3783"). 
InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714391 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "84d7aac6-1073-41c0-acff-169e36ec197d" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714540 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714562 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714579 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-pod-info\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714590 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714601 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714613 4867 reconciler_common.go:293] "Volume detached for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714626 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714637 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgxj9\" (UniqueName: \"kubernetes.io/projected/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-kube-api-access-fgxj9\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714673 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714685 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-server-conf\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714698 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714710 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714720 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714731 4867 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.714742 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.716040 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-kube-api-access-gbzfw" (OuterVolumeSpecName: "kube-api-access-gbzfw") pod "84d7aac6-1073-41c0-acff-169e36ec197d" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d"). InnerVolumeSpecName "kube-api-access-gbzfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.716658 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "84d7aac6-1073-41c0-acff-169e36ec197d" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.717287 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/84d7aac6-1073-41c0-acff-169e36ec197d-pod-info" (OuterVolumeSpecName: "pod-info") pod "84d7aac6-1073-41c0-acff-169e36ec197d" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.717471 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "84d7aac6-1073-41c0-acff-169e36ec197d" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.717530 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d7aac6-1073-41c0-acff-169e36ec197d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "84d7aac6-1073-41c0-acff-169e36ec197d" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.740074 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data" (OuterVolumeSpecName: "config-data") pod "84d7aac6-1073-41c0-acff-169e36ec197d" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.741013 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.750906 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-server-conf" (OuterVolumeSpecName: "server-conf") pod "84d7aac6-1073-41c0-acff-169e36ec197d" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.789221 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "84d7aac6-1073-41c0-acff-169e36ec197d" (UID: "84d7aac6-1073-41c0-acff-169e36ec197d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.815157 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02bf5c7d-1674-4308-8bcf-751d6c4a3783-scripts\") pod \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") "
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.815194 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-combined-ca-bundle\") pod \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") "
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.815212 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run\") pod \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") "
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.815297 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run" (OuterVolumeSpecName: "var-run") pod "02bf5c7d-1674-4308-8bcf-751d6c4a3783" (UID: "02bf5c7d-1674-4308-8bcf-751d6c4a3783"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.815247 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-ovn-controller-tls-certs\") pod \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") "
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.815355 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2llt4\" (UniqueName: \"kubernetes.io/projected/02bf5c7d-1674-4308-8bcf-751d6c4a3783-kube-api-access-2llt4\") pod \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") "
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.815917 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-log-ovn\") pod \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\" (UID: \"02bf5c7d-1674-4308-8bcf-751d6c4a3783\") "
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816003 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "02bf5c7d-1674-4308-8bcf-751d6c4a3783" (UID: "02bf5c7d-1674-4308-8bcf-751d6c4a3783"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816448 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02bf5c7d-1674-4308-8bcf-751d6c4a3783-scripts" (OuterVolumeSpecName: "scripts") pod "02bf5c7d-1674-4308-8bcf-751d6c4a3783" (UID: "02bf5c7d-1674-4308-8bcf-751d6c4a3783"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816680 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816709 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/84d7aac6-1073-41c0-acff-169e36ec197d-pod-info\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816722 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816735 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816779 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816794 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/84d7aac6-1073-41c0-acff-169e36ec197d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816808 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-config-data\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816820 4867 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816832 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/84d7aac6-1073-41c0-acff-169e36ec197d-server-conf\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816846 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02bf5c7d-1674-4308-8bcf-751d6c4a3783-scripts\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816859 4867 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02bf5c7d-1674-4308-8bcf-751d6c4a3783-var-run\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.816871 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbzfw\" (UniqueName: \"kubernetes.io/projected/84d7aac6-1073-41c0-acff-169e36ec197d-kube-api-access-gbzfw\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.818859 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02bf5c7d-1674-4308-8bcf-751d6c4a3783-kube-api-access-2llt4" (OuterVolumeSpecName: "kube-api-access-2llt4") pod "02bf5c7d-1674-4308-8bcf-751d6c4a3783" (UID: "02bf5c7d-1674-4308-8bcf-751d6c4a3783"). InnerVolumeSpecName "kube-api-access-2llt4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.832787 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.834533 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02bf5c7d-1674-4308-8bcf-751d6c4a3783" (UID: "02bf5c7d-1674-4308-8bcf-751d6c4a3783"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.858764 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "02bf5c7d-1674-4308-8bcf-751d6c4a3783" (UID: "02bf5c7d-1674-4308-8bcf-751d6c4a3783"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.895303 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8jl6r_02bf5c7d-1674-4308-8bcf-751d6c4a3783/ovn-controller/0.log"
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.895346 4867 generic.go:334] "Generic (PLEG): container finished" podID="02bf5c7d-1674-4308-8bcf-751d6c4a3783" containerID="c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9" exitCode=139
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.895392 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jl6r" event={"ID":"02bf5c7d-1674-4308-8bcf-751d6c4a3783","Type":"ContainerDied","Data":"c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9"}
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.895417 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jl6r" event={"ID":"02bf5c7d-1674-4308-8bcf-751d6c4a3783","Type":"ContainerDied","Data":"d6833be0face85320c24caf8c9689ccc88d4efbbf08d20fcdfdc9aa8fe11e591"}
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.895433 4867 scope.go:117] "RemoveContainer" containerID="c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9"
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.895514 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jl6r"
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.908036 4867 generic.go:334] "Generic (PLEG): container finished" podID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" containerID="bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55" exitCode=0
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.908110 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99","Type":"ContainerDied","Data":"bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55"}
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.908142 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99","Type":"ContainerDied","Data":"43ccb89b14f5d6ab3efda5233941517a044e3be038303100a78892f5374fe376"}
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.908220 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.912532 4867 generic.go:334] "Generic (PLEG): container finished" podID="84d7aac6-1073-41c0-acff-169e36ec197d" containerID="8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b" exitCode=0
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.912804 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.912814 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"84d7aac6-1073-41c0-acff-169e36ec197d","Type":"ContainerDied","Data":"8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b"}
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.912984 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"84d7aac6-1073-41c0-acff-169e36ec197d","Type":"ContainerDied","Data":"0a965d6da9ca07e8eb784a68437e692b51772e19780f24a10abeda0c60018d21"}
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.918044 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.918075 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2llt4\" (UniqueName: \"kubernetes.io/projected/02bf5c7d-1674-4308-8bcf-751d6c4a3783-kube-api-access-2llt4\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.918089 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.918102 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bf5c7d-1674-4308-8bcf-751d6c4a3783-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.947247 4867 scope.go:117] "RemoveContainer" containerID="c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9"
Jan 01 08:50:34 crc kubenswrapper[4867]: E0101 08:50:34.954350 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9\": container with ID starting with c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9 not found: ID does not exist" containerID="c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9"
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.954400 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9"} err="failed to get container status \"c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9\": rpc error: code = NotFound desc = could not find container \"c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9\": container with ID starting with c6772513d0760f994bd984b33ae94d311c871e514a88791bb619cd34da240dd9 not found: ID does not exist"
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.954438 4867 scope.go:117] "RemoveContainer" containerID="bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55"
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.958986 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.980994 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.992764 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 01 08:50:34 crc kubenswrapper[4867]: I0101 08:50:34.998786 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.000147 4867 scope.go:117] "RemoveContainer" containerID="dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.004240 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8jl6r"]
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.025542 4867 scope.go:117] "RemoveContainer" containerID="bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55"
Jan 01 08:50:35 crc kubenswrapper[4867]: E0101 08:50:35.026391 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55\": container with ID starting with bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55 not found: ID does not exist" containerID="bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.026472 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55"} err="failed to get container status \"bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55\": rpc error: code = NotFound desc = could not find container \"bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55\": container with ID starting with bc5390d4bcf01426a28783738a2f8a8259143f42fe7c013e5c96ae09dbf77b55 not found: ID does not exist"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.026520 4867 scope.go:117] "RemoveContainer" containerID="dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.026535 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8jl6r"]
Jan 01 08:50:35 crc kubenswrapper[4867]: E0101 08:50:35.026816 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6\": container with ID starting with dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6 not found: ID does not exist" containerID="dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.026933 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6"} err="failed to get container status \"dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6\": rpc error: code = NotFound desc = could not find container \"dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6\": container with ID starting with dc368467d4b3d995dcecfe0ff1d3410bbb5d37c4caf4fae784cd19c720d828d6 not found: ID does not exist"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.026959 4867 scope.go:117] "RemoveContainer" containerID="8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.042907 4867 scope.go:117] "RemoveContainer" containerID="ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.060554 4867 scope.go:117] "RemoveContainer" containerID="8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b"
Jan 01 08:50:35 crc kubenswrapper[4867]: E0101 08:50:35.060918 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b\": container with ID starting with 8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b not found: ID does not exist" containerID="8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.060950 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b"} err="failed to get container status \"8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b\": rpc error: code = NotFound desc = could not find container \"8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b\": container with ID starting with 8eae07fdea9c0953b3fdfc9cbf9df315288333bc30f11ac89880d48e2c61ac1b not found: ID does not exist"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.060971 4867 scope.go:117] "RemoveContainer" containerID="ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386"
Jan 01 08:50:35 crc kubenswrapper[4867]: E0101 08:50:35.061360 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386\": container with ID starting with ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386 not found: ID does not exist" containerID="ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.061401 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386"} err="failed to get container status \"ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386\": rpc error: code = NotFound desc = could not find container \"ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386\": container with ID starting with ccf5ec4f83d69a7451d4e4e6f25b8108ea8d0370b161ff5d8a9669794a1fb386 not found: ID does not exist"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.154824 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02bf5c7d-1674-4308-8bcf-751d6c4a3783" path="/var/lib/kubelet/pods/02bf5c7d-1674-4308-8bcf-751d6c4a3783/volumes"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.155554 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" path="/var/lib/kubelet/pods/1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99/volumes"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.156664 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d7aac6-1073-41c0-acff-169e36ec197d" path="/var/lib/kubelet/pods/84d7aac6-1073-41c0-acff-169e36ec197d/volumes"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.157221 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="985cc3ff-ea2f-4386-a828-180deef97412" path="/var/lib/kubelet/pods/985cc3ff-ea2f-4386-a828-180deef97412/volumes"
Jan 01 08:50:35 crc kubenswrapper[4867]: E0101 08:50:35.397378 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd is running failed: container process not found" containerID="f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 01 08:50:35 crc kubenswrapper[4867]: E0101 08:50:35.397790 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd is running failed: container process not found" containerID="f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 01 08:50:35 crc kubenswrapper[4867]: E0101 08:50:35.398061 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd is running failed: container process not found" containerID="f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 01 08:50:35 crc kubenswrapper[4867]: E0101 08:50:35.398084 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="8799ae41-c9cb-409a-ac59-3e6b59bb0198" containerName="nova-cell1-conductor-conductor"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.700804 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.758477 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.792143 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.829273 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx5tm\" (UniqueName: \"kubernetes.io/projected/8799ae41-c9cb-409a-ac59-3e6b59bb0198-kube-api-access-kx5tm\") pod \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.829311 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-config-data\") pod \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.829403 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-combined-ca-bundle\") pod \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\" (UID: \"8799ae41-c9cb-409a-ac59-3e6b59bb0198\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.838129 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8799ae41-c9cb-409a-ac59-3e6b59bb0198-kube-api-access-kx5tm" (OuterVolumeSpecName: "kube-api-access-kx5tm") pod "8799ae41-c9cb-409a-ac59-3e6b59bb0198" (UID: "8799ae41-c9cb-409a-ac59-3e6b59bb0198"). InnerVolumeSpecName "kube-api-access-kx5tm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.870975 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-config-data" (OuterVolumeSpecName: "config-data") pod "8799ae41-c9cb-409a-ac59-3e6b59bb0198" (UID: "8799ae41-c9cb-409a-ac59-3e6b59bb0198"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.929107 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8799ae41-c9cb-409a-ac59-3e6b59bb0198" (UID: "8799ae41-c9cb-409a-ac59-3e6b59bb0198"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.929980 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-ceilometer-tls-certs\") pod \"8dad921b-d7dd-4113-85d2-78d6f59944b4\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.930080 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5ggw\" (UniqueName: \"kubernetes.io/projected/8dad921b-d7dd-4113-85d2-78d6f59944b4-kube-api-access-c5ggw\") pod \"8dad921b-d7dd-4113-85d2-78d6f59944b4\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.930171 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-combined-ca-bundle\") pod \"8dad921b-d7dd-4113-85d2-78d6f59944b4\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.930243 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68bv9\" (UniqueName: \"kubernetes.io/projected/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-kube-api-access-68bv9\") pod \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\" (UID: \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.930331 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-config-data\") pod \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\" (UID: \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.930447 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-combined-ca-bundle\") pod \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\" (UID: \"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.930519 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-log-httpd\") pod \"8dad921b-d7dd-4113-85d2-78d6f59944b4\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.930635 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-config-data\") pod \"8dad921b-d7dd-4113-85d2-78d6f59944b4\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.930710 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-run-httpd\") pod \"8dad921b-d7dd-4113-85d2-78d6f59944b4\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.930783 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-sg-core-conf-yaml\") pod \"8dad921b-d7dd-4113-85d2-78d6f59944b4\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.930850 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-scripts\") pod \"8dad921b-d7dd-4113-85d2-78d6f59944b4\" (UID: \"8dad921b-d7dd-4113-85d2-78d6f59944b4\") "
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.931151 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx5tm\" (UniqueName: \"kubernetes.io/projected/8799ae41-c9cb-409a-ac59-3e6b59bb0198-kube-api-access-kx5tm\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.931211 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-config-data\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.931286 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8799ae41-c9cb-409a-ac59-3e6b59bb0198-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.933825 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-scripts" (OuterVolumeSpecName: "scripts") pod "8dad921b-d7dd-4113-85d2-78d6f59944b4" (UID: "8dad921b-d7dd-4113-85d2-78d6f59944b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.941366 4867 generic.go:334] "Generic (PLEG): container finished" podID="8799ae41-c9cb-409a-ac59-3e6b59bb0198" containerID="f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd" exitCode=0
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.941463 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8799ae41-c9cb-409a-ac59-3e6b59bb0198","Type":"ContainerDied","Data":"f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd"}
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.941491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8799ae41-c9cb-409a-ac59-3e6b59bb0198","Type":"ContainerDied","Data":"26a9e5e2df70974612bfa34e9b15e287492a7dc38a03f008f32a904f9ed08b17"}
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.941511 4867 scope.go:117] "RemoveContainer" containerID="f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.941626 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.942087 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8dad921b-d7dd-4113-85d2-78d6f59944b4" (UID: "8dad921b-d7dd-4113-85d2-78d6f59944b4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.952012 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8dad921b-d7dd-4113-85d2-78d6f59944b4" (UID: "8dad921b-d7dd-4113-85d2-78d6f59944b4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.973012 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dad921b-d7dd-4113-85d2-78d6f59944b4-kube-api-access-c5ggw" (OuterVolumeSpecName: "kube-api-access-c5ggw") pod "8dad921b-d7dd-4113-85d2-78d6f59944b4" (UID: "8dad921b-d7dd-4113-85d2-78d6f59944b4"). InnerVolumeSpecName "kube-api-access-c5ggw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.982337 4867 generic.go:334] "Generic (PLEG): container finished" podID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerID="414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e" exitCode=0
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.982395 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dad921b-d7dd-4113-85d2-78d6f59944b4","Type":"ContainerDied","Data":"414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e"}
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.982420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dad921b-d7dd-4113-85d2-78d6f59944b4","Type":"ContainerDied","Data":"82fad26114b9d1d2173180067eb4d1901d30fad6a5254f634a2ac2616775e407"}
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.982505 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.984128 4867 generic.go:334] "Generic (PLEG): container finished" podID="7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb" containerID="e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a" exitCode=0
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.984154 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb","Type":"ContainerDied","Data":"e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a"}
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.984170 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb","Type":"ContainerDied","Data":"3a2f796a54c9f1518366d91291aafe612e4c60f2a9c2315e7ef35e839ec7d762"}
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.984206 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.990587 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.991059 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-kube-api-access-68bv9" (OuterVolumeSpecName: "kube-api-access-68bv9") pod "7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb" (UID: "7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb"). InnerVolumeSpecName "kube-api-access-68bv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 01 08:50:35 crc kubenswrapper[4867]: I0101 08:50:35.996723 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.000373 4867 scope.go:117] "RemoveContainer" containerID="f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd"
Jan 01 08:50:36 crc kubenswrapper[4867]: E0101 08:50:36.000698 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd\": container with ID starting with f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd not found: ID does not exist" containerID="f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd"
Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.000726 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd"} err="failed to get container status \"f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd\": rpc error: code = NotFound desc = could not find container \"f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd\": container with ID starting with f57ce717c258cef589d7a47e6fbf0facf4d6e2d61727c0cbd20f621c798a45bd not found: ID does not exist"
Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.000745 4867 scope.go:117] "RemoveContainer" containerID="9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2"
Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.006040 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb" (UID: "7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.013390 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8dad921b-d7dd-4113-85d2-78d6f59944b4" (UID: "8dad921b-d7dd-4113-85d2-78d6f59944b4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.026916 4867 scope.go:117] "RemoveContainer" containerID="46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02"
Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.030045 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-config-data" (OuterVolumeSpecName: "config-data") pod "7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb" (UID: "7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.032343 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68bv9\" (UniqueName: \"kubernetes.io/projected/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-kube-api-access-68bv9\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.032433 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.032531 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.032590 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.032642 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dad921b-d7dd-4113-85d2-78d6f59944b4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.032721 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.032807 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.032868 4867 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-c5ggw\" (UniqueName: \"kubernetes.io/projected/8dad921b-d7dd-4113-85d2-78d6f59944b4-kube-api-access-c5ggw\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.035452 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8dad921b-d7dd-4113-85d2-78d6f59944b4" (UID: "8dad921b-d7dd-4113-85d2-78d6f59944b4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.042406 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dad921b-d7dd-4113-85d2-78d6f59944b4" (UID: "8dad921b-d7dd-4113-85d2-78d6f59944b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.054553 4867 scope.go:117] "RemoveContainer" containerID="414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e" Jan 01 08:50:36 crc kubenswrapper[4867]: E0101 08:50:36.070639 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8799ae41_c9cb_409a_ac59_3e6b59bb0198.slice/crio-26a9e5e2df70974612bfa34e9b15e287492a7dc38a03f008f32a904f9ed08b17\": RecentStats: unable to find data in memory cache]" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.082060 4867 scope.go:117] "RemoveContainer" containerID="b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.089074 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-config-data" (OuterVolumeSpecName: "config-data") pod "8dad921b-d7dd-4113-85d2-78d6f59944b4" (UID: "8dad921b-d7dd-4113-85d2-78d6f59944b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.104306 4867 scope.go:117] "RemoveContainer" containerID="9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2" Jan 01 08:50:36 crc kubenswrapper[4867]: E0101 08:50:36.104554 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2\": container with ID starting with 9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2 not found: ID does not exist" containerID="9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.104584 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2"} err="failed to get container status \"9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2\": rpc error: code = NotFound desc = could not find container \"9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2\": container with ID starting with 9dd7ea3e293dda5baf51be491f21418ac4b799705fb9b3ff054db3ba80da00c2 not found: ID does not exist" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.104604 4867 scope.go:117] "RemoveContainer" containerID="46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02" Jan 01 08:50:36 crc kubenswrapper[4867]: E0101 08:50:36.104799 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02\": container with ID starting with 46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02 not found: ID does not exist" containerID="46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.104819 
4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02"} err="failed to get container status \"46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02\": rpc error: code = NotFound desc = could not find container \"46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02\": container with ID starting with 46a9c735094db260247988c6fe2d5ab62a8071af9e424b678da59b0a5a682f02 not found: ID does not exist" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.104833 4867 scope.go:117] "RemoveContainer" containerID="414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e" Jan 01 08:50:36 crc kubenswrapper[4867]: E0101 08:50:36.105005 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e\": container with ID starting with 414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e not found: ID does not exist" containerID="414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.105025 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e"} err="failed to get container status \"414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e\": rpc error: code = NotFound desc = could not find container \"414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e\": container with ID starting with 414ccbc2e33650855d1dd8b10146a435965c903f63378a1ad4e86b3cd9a12e1e not found: ID does not exist" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.105037 4867 scope.go:117] "RemoveContainer" containerID="b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049" Jan 01 08:50:36 crc kubenswrapper[4867]: E0101 
08:50:36.105197 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049\": container with ID starting with b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049 not found: ID does not exist" containerID="b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.105214 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049"} err="failed to get container status \"b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049\": rpc error: code = NotFound desc = could not find container \"b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049\": container with ID starting with b766b7e81e4c520bf9e2f42a30c03f5beb3f1fa3a2ef8a0d49676f36a97ed049 not found: ID does not exist" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.105230 4867 scope.go:117] "RemoveContainer" containerID="e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.127314 4867 scope.go:117] "RemoveContainer" containerID="e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a" Jan 01 08:50:36 crc kubenswrapper[4867]: E0101 08:50:36.127721 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a\": container with ID starting with e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a not found: ID does not exist" containerID="e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.127769 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a"} err="failed to get container status \"e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a\": rpc error: code = NotFound desc = could not find container \"e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a\": container with ID starting with e36af79288ec74b9ac3b28d475ec0bec31b44ef20ef075ee5431a9a0e5c8698a not found: ID does not exist" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.134338 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.134360 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.134369 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dad921b-d7dd-4113-85d2-78d6f59944b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.343966 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.353189 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.361847 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:50:36 crc kubenswrapper[4867]: I0101 08:50:36.369228 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 01 08:50:37 crc kubenswrapper[4867]: E0101 08:50:37.123257 4867 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:37 crc kubenswrapper[4867]: E0101 08:50:37.124191 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:37 crc kubenswrapper[4867]: E0101 08:50:37.124731 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:37 crc kubenswrapper[4867]: E0101 08:50:37.124813 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server" Jan 01 08:50:37 crc kubenswrapper[4867]: E0101 08:50:37.131882 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:37 crc kubenswrapper[4867]: E0101 08:50:37.135421 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:37 crc kubenswrapper[4867]: E0101 08:50:37.137605 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:37 crc kubenswrapper[4867]: E0101 08:50:37.137778 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovs-vswitchd" Jan 01 08:50:37 crc kubenswrapper[4867]: I0101 08:50:37.150394 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb" path="/var/lib/kubelet/pods/7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb/volumes" Jan 01 08:50:37 crc kubenswrapper[4867]: I0101 08:50:37.151722 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8799ae41-c9cb-409a-ac59-3e6b59bb0198" path="/var/lib/kubelet/pods/8799ae41-c9cb-409a-ac59-3e6b59bb0198/volumes" Jan 01 08:50:37 crc kubenswrapper[4867]: I0101 08:50:37.152587 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" path="/var/lib/kubelet/pods/8dad921b-d7dd-4113-85d2-78d6f59944b4/volumes" Jan 01 08:50:38 crc kubenswrapper[4867]: I0101 08:50:38.626350 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5fb785fd89-9d8g9" podUID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused" Jan 01 08:50:42 crc kubenswrapper[4867]: E0101 08:50:42.122821 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:42 crc kubenswrapper[4867]: E0101 08:50:42.123195 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:42 crc kubenswrapper[4867]: E0101 08:50:42.123622 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:42 crc kubenswrapper[4867]: E0101 08:50:42.123665 4867 prober.go:104] "Probe 
errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server" Jan 01 08:50:42 crc kubenswrapper[4867]: E0101 08:50:42.125074 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:42 crc kubenswrapper[4867]: E0101 08:50:42.126360 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:42 crc kubenswrapper[4867]: E0101 08:50:42.128175 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:42 crc kubenswrapper[4867]: E0101 08:50:42.128209 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovs-vswitchd" Jan 01 08:50:47 crc kubenswrapper[4867]: E0101 
08:50:47.124040 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:47 crc kubenswrapper[4867]: E0101 08:50:47.126121 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:47 crc kubenswrapper[4867]: E0101 08:50:47.126163 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:47 crc kubenswrapper[4867]: E0101 08:50:47.127690 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:47 crc kubenswrapper[4867]: E0101 08:50:47.127776 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server" Jan 01 08:50:47 crc kubenswrapper[4867]: E0101 08:50:47.128504 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:47 crc kubenswrapper[4867]: E0101 08:50:47.130728 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:47 crc kubenswrapper[4867]: E0101 08:50:47.130799 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovs-vswitchd" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.710517 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.881473 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-public-tls-certs\") pod \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.881543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-internal-tls-certs\") pod \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.881571 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-httpd-config\") pod \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.881603 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54ph6\" (UniqueName: \"kubernetes.io/projected/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-kube-api-access-54ph6\") pod \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.881633 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-config\") pod \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.881670 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-combined-ca-bundle\") pod \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.881700 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-ovndb-tls-certs\") pod \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\" (UID: \"0973b1fb-6399-4d31-aa7e-2a41a163e4f4\") " Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.890110 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0973b1fb-6399-4d31-aa7e-2a41a163e4f4" (UID: "0973b1fb-6399-4d31-aa7e-2a41a163e4f4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.898949 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-kube-api-access-54ph6" (OuterVolumeSpecName: "kube-api-access-54ph6") pod "0973b1fb-6399-4d31-aa7e-2a41a163e4f4" (UID: "0973b1fb-6399-4d31-aa7e-2a41a163e4f4"). InnerVolumeSpecName "kube-api-access-54ph6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.920452 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0973b1fb-6399-4d31-aa7e-2a41a163e4f4" (UID: "0973b1fb-6399-4d31-aa7e-2a41a163e4f4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.934280 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-config" (OuterVolumeSpecName: "config") pod "0973b1fb-6399-4d31-aa7e-2a41a163e4f4" (UID: "0973b1fb-6399-4d31-aa7e-2a41a163e4f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.940373 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0973b1fb-6399-4d31-aa7e-2a41a163e4f4" (UID: "0973b1fb-6399-4d31-aa7e-2a41a163e4f4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.943360 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0973b1fb-6399-4d31-aa7e-2a41a163e4f4" (UID: "0973b1fb-6399-4d31-aa7e-2a41a163e4f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.964557 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0973b1fb-6399-4d31-aa7e-2a41a163e4f4" (UID: "0973b1fb-6399-4d31-aa7e-2a41a163e4f4"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.983635 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.984105 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.984118 4867 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.984126 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.984134 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.984141 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:49 crc kubenswrapper[4867]: I0101 08:50:49.984151 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54ph6\" (UniqueName: \"kubernetes.io/projected/0973b1fb-6399-4d31-aa7e-2a41a163e4f4-kube-api-access-54ph6\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.159026 4867 
generic.go:334] "Generic (PLEG): container finished" podID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" containerID="729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d" exitCode=0 Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.159066 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fb785fd89-9d8g9" Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.159070 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fb785fd89-9d8g9" event={"ID":"0973b1fb-6399-4d31-aa7e-2a41a163e4f4","Type":"ContainerDied","Data":"729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d"} Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.159248 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fb785fd89-9d8g9" event={"ID":"0973b1fb-6399-4d31-aa7e-2a41a163e4f4","Type":"ContainerDied","Data":"ff5e46304dfd2d1375fb26f79017527c9b78ef588816bac9d81188ffad6768b8"} Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.159299 4867 scope.go:117] "RemoveContainer" containerID="bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59" Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.198957 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fb785fd89-9d8g9"] Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.204760 4867 scope.go:117] "RemoveContainer" containerID="729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d" Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.209085 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5fb785fd89-9d8g9"] Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.234569 4867 scope.go:117] "RemoveContainer" containerID="bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59" Jan 01 08:50:50 crc kubenswrapper[4867]: E0101 08:50:50.235090 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59\": container with ID starting with bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59 not found: ID does not exist" containerID="bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59" Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.235146 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59"} err="failed to get container status \"bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59\": rpc error: code = NotFound desc = could not find container \"bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59\": container with ID starting with bac9c7668db5a75c9609096697c08006409e297a731d4223463f224f07576d59 not found: ID does not exist" Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.235186 4867 scope.go:117] "RemoveContainer" containerID="729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d" Jan 01 08:50:50 crc kubenswrapper[4867]: E0101 08:50:50.235702 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d\": container with ID starting with 729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d not found: ID does not exist" containerID="729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d" Jan 01 08:50:50 crc kubenswrapper[4867]: I0101 08:50:50.235741 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d"} err="failed to get container status \"729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d\": rpc error: code = NotFound desc = could not find container 
\"729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d\": container with ID starting with 729b6a580bd1e1ee405c44dd7bf80943fddab7c16924f9f0fb594ae3af67973d not found: ID does not exist" Jan 01 08:50:51 crc kubenswrapper[4867]: I0101 08:50:51.148308 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" path="/var/lib/kubelet/pods/0973b1fb-6399-4d31-aa7e-2a41a163e4f4/volumes" Jan 01 08:50:52 crc kubenswrapper[4867]: E0101 08:50:52.123402 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:52 crc kubenswrapper[4867]: E0101 08:50:52.124083 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:52 crc kubenswrapper[4867]: E0101 08:50:52.124622 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:52 crc kubenswrapper[4867]: E0101 08:50:52.124665 4867 prober.go:104] "Probe errored" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server" Jan 01 08:50:52 crc kubenswrapper[4867]: E0101 08:50:52.124858 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:52 crc kubenswrapper[4867]: E0101 08:50:52.126524 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:52 crc kubenswrapper[4867]: E0101 08:50:52.129278 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:52 crc kubenswrapper[4867]: E0101 08:50:52.129394 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovs-vswitchd" Jan 01 08:50:57 crc kubenswrapper[4867]: E0101 08:50:57.123304 4867 log.go:32] "ExecSync 
cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b is running failed: container process not found" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:57 crc kubenswrapper[4867]: E0101 08:50:57.123337 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:57 crc kubenswrapper[4867]: E0101 08:50:57.124451 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b is running failed: container process not found" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:57 crc kubenswrapper[4867]: E0101 08:50:57.124523 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:57 crc kubenswrapper[4867]: E0101 08:50:57.124899 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b is running failed: container process not found" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 01 08:50:57 crc kubenswrapper[4867]: E0101 08:50:57.124931 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovs-vswitchd" Jan 01 08:50:57 crc kubenswrapper[4867]: E0101 08:50:57.124991 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 01 08:50:57 crc kubenswrapper[4867]: E0101 08:50:57.125068 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-smgl6" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.251756 4867 generic.go:334] "Generic (PLEG): container finished" podID="3205b065-c067-4035-8afb-e2bbcc7d8a42" containerID="eb7dcef39a55694c9e76f1f5778b1c287c9ba1f1a1711c0d8fbaaad900a62405" exitCode=137 Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.251846 4867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3205b065-c067-4035-8afb-e2bbcc7d8a42","Type":"ContainerDied","Data":"eb7dcef39a55694c9e76f1f5778b1c287c9ba1f1a1711c0d8fbaaad900a62405"} Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.251938 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3205b065-c067-4035-8afb-e2bbcc7d8a42","Type":"ContainerDied","Data":"e4046b3e49161a2c6897ac9a886e9820415272b0c4b53c73b3dd45eff1499813"} Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.251954 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4046b3e49161a2c6897ac9a886e9820415272b0c4b53c73b3dd45eff1499813" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.254537 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-smgl6_d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e/ovs-vswitchd/0.log" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.256127 4867 generic.go:334] "Generic (PLEG): container finished" podID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" exitCode=137 Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.256161 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-smgl6" event={"ID":"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e","Type":"ContainerDied","Data":"c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b"} Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.301195 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.306034 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-smgl6_d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e/ovs-vswitchd/0.log" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.307199 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.356712 4867 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod1620c75e-1129-4850-9b27-7666e4cb8ed5"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod1620c75e-1129-4850-9b27-7666e4cb8ed5] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1620c75e_1129_4850_9b27_7666e4cb8ed5.slice" Jan 01 08:50:57 crc kubenswrapper[4867]: E0101 08:50:57.356770 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod1620c75e-1129-4850-9b27-7666e4cb8ed5] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod1620c75e-1129-4850-9b27-7666e4cb8ed5] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1620c75e_1129_4850_9b27_7666e4cb8ed5.slice" pod="openstack/ovsdbserver-sb-0" podUID="1620c75e-1129-4850-9b27-7666e4cb8ed5" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398416 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-lib\") pod \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398492 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8n6v\" (UniqueName: 
\"kubernetes.io/projected/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-kube-api-access-j8n6v\") pod \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data-custom\") pod \"3205b065-c067-4035-8afb-e2bbcc7d8a42\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398569 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-scripts\") pod \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398559 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-lib" (OuterVolumeSpecName: "var-lib") pod "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" (UID: "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398604 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgqjm\" (UniqueName: \"kubernetes.io/projected/3205b065-c067-4035-8afb-e2bbcc7d8a42-kube-api-access-xgqjm\") pod \"3205b065-c067-4035-8afb-e2bbcc7d8a42\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398629 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-combined-ca-bundle\") pod \"3205b065-c067-4035-8afb-e2bbcc7d8a42\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398649 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-etc-ovs\") pod \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398668 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-scripts\") pod \"3205b065-c067-4035-8afb-e2bbcc7d8a42\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398704 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3205b065-c067-4035-8afb-e2bbcc7d8a42-etc-machine-id\") pod \"3205b065-c067-4035-8afb-e2bbcc7d8a42\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398737 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-log\") pod \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398761 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data\") pod \"3205b065-c067-4035-8afb-e2bbcc7d8a42\" (UID: \"3205b065-c067-4035-8afb-e2bbcc7d8a42\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.398809 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-run\") pod \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\" (UID: \"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e\") " Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.399135 4867 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-lib\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.399184 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-run" (OuterVolumeSpecName: "var-run") pod "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" (UID: "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.400358 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-scripts" (OuterVolumeSpecName: "scripts") pod "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" (UID: "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.400412 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3205b065-c067-4035-8afb-e2bbcc7d8a42-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3205b065-c067-4035-8afb-e2bbcc7d8a42" (UID: "3205b065-c067-4035-8afb-e2bbcc7d8a42"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.400435 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-log" (OuterVolumeSpecName: "var-log") pod "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" (UID: "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.400914 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" (UID: "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.405056 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3205b065-c067-4035-8afb-e2bbcc7d8a42" (UID: "3205b065-c067-4035-8afb-e2bbcc7d8a42"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.405490 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3205b065-c067-4035-8afb-e2bbcc7d8a42-kube-api-access-xgqjm" (OuterVolumeSpecName: "kube-api-access-xgqjm") pod "3205b065-c067-4035-8afb-e2bbcc7d8a42" (UID: "3205b065-c067-4035-8afb-e2bbcc7d8a42"). InnerVolumeSpecName "kube-api-access-xgqjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.407232 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-scripts" (OuterVolumeSpecName: "scripts") pod "3205b065-c067-4035-8afb-e2bbcc7d8a42" (UID: "3205b065-c067-4035-8afb-e2bbcc7d8a42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.407482 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-kube-api-access-j8n6v" (OuterVolumeSpecName: "kube-api-access-j8n6v") pod "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" (UID: "d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e"). InnerVolumeSpecName "kube-api-access-j8n6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.457064 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3205b065-c067-4035-8afb-e2bbcc7d8a42" (UID: "3205b065-c067-4035-8afb-e2bbcc7d8a42"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.488305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data" (OuterVolumeSpecName: "config-data") pod "3205b065-c067-4035-8afb-e2bbcc7d8a42" (UID: "3205b065-c067-4035-8afb-e2bbcc7d8a42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.500895 4867 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-log\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.500923 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.500933 4867 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-var-run\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.500944 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8n6v\" (UniqueName: \"kubernetes.io/projected/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-kube-api-access-j8n6v\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.500984 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.500994 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.501002 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgqjm\" (UniqueName: \"kubernetes.io/projected/3205b065-c067-4035-8afb-e2bbcc7d8a42-kube-api-access-xgqjm\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.501010 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.501017 4867 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.501024 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3205b065-c067-4035-8afb-e2bbcc7d8a42-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.501053 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3205b065-c067-4035-8afb-e2bbcc7d8a42-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:57 crc kubenswrapper[4867]: I0101 08:50:57.895807 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.007819 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-lock\") pod \"f1f687f2-3229-401c-b5cb-f79e96311c45\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.007864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-cache\") pod \"f1f687f2-3229-401c-b5cb-f79e96311c45\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.007923 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pktsx\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-kube-api-access-pktsx\") pod \"f1f687f2-3229-401c-b5cb-f79e96311c45\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.008026 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift\") pod \"f1f687f2-3229-401c-b5cb-f79e96311c45\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.008072 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f1f687f2-3229-401c-b5cb-f79e96311c45\" (UID: \"f1f687f2-3229-401c-b5cb-f79e96311c45\") " Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.008776 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-lock" (OuterVolumeSpecName: "lock") pod 
"f1f687f2-3229-401c-b5cb-f79e96311c45" (UID: "f1f687f2-3229-401c-b5cb-f79e96311c45"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.008823 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-cache" (OuterVolumeSpecName: "cache") pod "f1f687f2-3229-401c-b5cb-f79e96311c45" (UID: "f1f687f2-3229-401c-b5cb-f79e96311c45"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.012544 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f1f687f2-3229-401c-b5cb-f79e96311c45" (UID: "f1f687f2-3229-401c-b5cb-f79e96311c45"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.016115 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "swift") pod "f1f687f2-3229-401c-b5cb-f79e96311c45" (UID: "f1f687f2-3229-401c-b5cb-f79e96311c45"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.016181 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-kube-api-access-pktsx" (OuterVolumeSpecName: "kube-api-access-pktsx") pod "f1f687f2-3229-401c-b5cb-f79e96311c45" (UID: "f1f687f2-3229-401c-b5cb-f79e96311c45"). InnerVolumeSpecName "kube-api-access-pktsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.109692 4867 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-lock\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.109763 4867 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f1f687f2-3229-401c-b5cb-f79e96311c45-cache\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.109782 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pktsx\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-kube-api-access-pktsx\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.109803 4867 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1f687f2-3229-401c-b5cb-f79e96311c45-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.109853 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.136222 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.211373 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.268655 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-smgl6_d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e/ovs-vswitchd/0.log" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.270154 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-smgl6" event={"ID":"d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e","Type":"ContainerDied","Data":"8286e944304ec89bdcd775c355caac2a1190ce6bb52fc57bb96a5e21818bb725"} Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.270186 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-smgl6" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.270220 4867 scope.go:117] "RemoveContainer" containerID="c5d97f4ef6c67417f1c06bc5b592d06096afac0628ba26043d76ab1c8ed2c65b" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.286532 4867 generic.go:334] "Generic (PLEG): container finished" podID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerID="772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e" exitCode=137 Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.286620 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.291812 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.294432 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e"} Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.294511 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1f687f2-3229-401c-b5cb-f79e96311c45","Type":"ContainerDied","Data":"eeda38690f053421ceda9618b7f4c7b6a16f4b1db48821d6e4e0c79fbbabae99"} Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.294604 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.311329 4867 scope.go:117] "RemoveContainer" containerID="823c79e24beab0c0581e806221085f97b6e5ce08fc7b4f10e791faea9d30f6bd" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.363011 4867 scope.go:117] "RemoveContainer" containerID="40c75e44cba104e45661a1d0c049238ec1f59a119529722a4ed7d8876855db31" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.369324 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.383933 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.391937 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-smgl6"] Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.406762 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-smgl6"] Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.409303 4867 scope.go:117] "RemoveContainer" containerID="772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e" Jan 01 
08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.414064 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.422250 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.428424 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.433437 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.438906 4867 scope.go:117] "RemoveContainer" containerID="fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.461353 4867 scope.go:117] "RemoveContainer" containerID="b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.489614 4867 scope.go:117] "RemoveContainer" containerID="fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.512109 4867 scope.go:117] "RemoveContainer" containerID="b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.527412 4867 scope.go:117] "RemoveContainer" containerID="5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.542432 4867 scope.go:117] "RemoveContainer" containerID="8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.561172 4867 scope.go:117] "RemoveContainer" containerID="c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.579636 4867 scope.go:117] "RemoveContainer" 
containerID="5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.601507 4867 scope.go:117] "RemoveContainer" containerID="2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.621974 4867 scope.go:117] "RemoveContainer" containerID="6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.639555 4867 scope.go:117] "RemoveContainer" containerID="e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.657622 4867 scope.go:117] "RemoveContainer" containerID="93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.681290 4867 scope.go:117] "RemoveContainer" containerID="86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.700716 4867 scope.go:117] "RemoveContainer" containerID="15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.719443 4867 scope.go:117] "RemoveContainer" containerID="772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.719848 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e\": container with ID starting with 772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e not found: ID does not exist" containerID="772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.719876 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e"} err="failed to get container status \"772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e\": rpc error: code = NotFound desc = could not find container \"772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e\": container with ID starting with 772bcfc85f46d71696e64d9fee0b787dd32f23fc9a773f22afa70be5798f659e not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.719908 4867 scope.go:117] "RemoveContainer" containerID="fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.720190 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70\": container with ID starting with fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70 not found: ID does not exist" containerID="fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.720221 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70"} err="failed to get container status \"fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70\": rpc error: code = NotFound desc = could not find container \"fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70\": container with ID starting with fce464d833a7c6c6c7e37a1de3b906e406e554021eb3b0eefbac9bedb4d2be70 not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.720241 4867 scope.go:117] "RemoveContainer" containerID="b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.720594 4867 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066\": container with ID starting with b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066 not found: ID does not exist" containerID="b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.720618 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066"} err="failed to get container status \"b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066\": rpc error: code = NotFound desc = could not find container \"b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066\": container with ID starting with b55659f4cef0ce86dc0aded13b57bc4bfb2f19e8bc919819f7fd767199f41066 not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.720634 4867 scope.go:117] "RemoveContainer" containerID="fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.720985 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9\": container with ID starting with fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9 not found: ID does not exist" containerID="fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.721014 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9"} err="failed to get container status \"fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9\": rpc error: code = NotFound desc = could not find container 
\"fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9\": container with ID starting with fa319fce7bd6ca10e6ad88adaf6f5948cf915cc2e7db3b5bac5c37f79dc9e5b9 not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.721034 4867 scope.go:117] "RemoveContainer" containerID="b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.721287 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493\": container with ID starting with b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493 not found: ID does not exist" containerID="b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.721306 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493"} err="failed to get container status \"b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493\": rpc error: code = NotFound desc = could not find container \"b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493\": container with ID starting with b2f2f0cace6bd82b9e6b696c3fd61a7a16620e0c17129fb46f6aa38276e62493 not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.721323 4867 scope.go:117] "RemoveContainer" containerID="5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.721505 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985\": container with ID starting with 5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985 not found: ID does not exist" 
containerID="5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.721527 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985"} err="failed to get container status \"5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985\": rpc error: code = NotFound desc = could not find container \"5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985\": container with ID starting with 5211eb74bef91f578c4b43a263d9289855c0625a768d9b2a86f8757c91730985 not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.721539 4867 scope.go:117] "RemoveContainer" containerID="8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.721762 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615\": container with ID starting with 8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615 not found: ID does not exist" containerID="8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.721780 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615"} err="failed to get container status \"8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615\": rpc error: code = NotFound desc = could not find container \"8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615\": container with ID starting with 8e53230183aa290b06a69f94b13b5239aeff13215b5020e77a00e699fab2b615 not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.721796 4867 scope.go:117] 
"RemoveContainer" containerID="c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.722069 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a\": container with ID starting with c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a not found: ID does not exist" containerID="c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.722129 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a"} err="failed to get container status \"c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a\": rpc error: code = NotFound desc = could not find container \"c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a\": container with ID starting with c025cb17cbfb32add357e55d8877da3e20770d7e20434859f494dda30db32f6a not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.722150 4867 scope.go:117] "RemoveContainer" containerID="5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.722547 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd\": container with ID starting with 5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd not found: ID does not exist" containerID="5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.722567 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd"} err="failed to get container status \"5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd\": rpc error: code = NotFound desc = could not find container \"5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd\": container with ID starting with 5bafde574302da2dbe87cd0e8e0471bb6c66be5019e2607894d57645ead89abd not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.722579 4867 scope.go:117] "RemoveContainer" containerID="2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.723256 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8\": container with ID starting with 2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8 not found: ID does not exist" containerID="2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.723320 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8"} err="failed to get container status \"2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8\": rpc error: code = NotFound desc = could not find container \"2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8\": container with ID starting with 2fdf59f8f13262498df0f837fac962ddddedc8b945cc894391e0ea1e2818f2f8 not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.723365 4867 scope.go:117] "RemoveContainer" containerID="6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.723736 4867 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787\": container with ID starting with 6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787 not found: ID does not exist" containerID="6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.723769 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787"} err="failed to get container status \"6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787\": rpc error: code = NotFound desc = could not find container \"6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787\": container with ID starting with 6ea6d3eb9e4320ec6933e61e262598b312bd6de3e889f0b82a2feeb2cb268787 not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.723791 4867 scope.go:117] "RemoveContainer" containerID="e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.724334 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f\": container with ID starting with e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f not found: ID does not exist" containerID="e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.724383 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f"} err="failed to get container status \"e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f\": rpc error: code = NotFound desc = could not find container 
\"e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f\": container with ID starting with e10bde6d2681fb218ae735e5c9c2775d890ddf6556d173f40a07de5845ae619f not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.724418 4867 scope.go:117] "RemoveContainer" containerID="93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.725022 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2\": container with ID starting with 93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2 not found: ID does not exist" containerID="93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.725065 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2"} err="failed to get container status \"93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2\": rpc error: code = NotFound desc = could not find container \"93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2\": container with ID starting with 93e3d74fe5bc76c92026ce2db2261c34e59a9fd78d02af2361f795c93327e0c2 not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.725097 4867 scope.go:117] "RemoveContainer" containerID="86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.725364 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142\": container with ID starting with 86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142 not found: ID does not exist" 
containerID="86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.725398 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142"} err="failed to get container status \"86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142\": rpc error: code = NotFound desc = could not find container \"86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142\": container with ID starting with 86b047d36ab3b40416494324d7472e6b2de70172c969ce8457a8c077f86da142 not found: ID does not exist" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.725424 4867 scope.go:117] "RemoveContainer" containerID="15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08" Jan 01 08:50:58 crc kubenswrapper[4867]: E0101 08:50:58.725774 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08\": container with ID starting with 15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08 not found: ID does not exist" containerID="15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08" Jan 01 08:50:58 crc kubenswrapper[4867]: I0101 08:50:58.725816 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08"} err="failed to get container status \"15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08\": rpc error: code = NotFound desc = could not find container \"15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08\": container with ID starting with 15a53dd61a838436c8cc640222228f29611c350aafbcd4c7ebd7e7d0037f6c08 not found: ID does not exist" Jan 01 08:50:59 crc kubenswrapper[4867]: I0101 08:50:59.144271 4867 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1620c75e-1129-4850-9b27-7666e4cb8ed5" path="/var/lib/kubelet/pods/1620c75e-1129-4850-9b27-7666e4cb8ed5/volumes" Jan 01 08:50:59 crc kubenswrapper[4867]: I0101 08:50:59.145519 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3205b065-c067-4035-8afb-e2bbcc7d8a42" path="/var/lib/kubelet/pods/3205b065-c067-4035-8afb-e2bbcc7d8a42/volumes" Jan 01 08:50:59 crc kubenswrapper[4867]: I0101 08:50:59.147085 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" path="/var/lib/kubelet/pods/d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e/volumes" Jan 01 08:50:59 crc kubenswrapper[4867]: I0101 08:50:59.149799 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" path="/var/lib/kubelet/pods/f1f687f2-3229-401c-b5cb-f79e96311c45/volumes" Jan 01 08:51:01 crc kubenswrapper[4867]: I0101 08:51:01.803708 4867 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod3f4d9b08-1038-4f16-9217-509166cc2e7b"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod3f4d9b08-1038-4f16-9217-509166cc2e7b] : Timed out while waiting for systemd to remove kubepods-besteffort-pod3f4d9b08_1038_4f16_9217_509166cc2e7b.slice" Jan 01 08:51:01 crc kubenswrapper[4867]: I0101 08:51:01.805706 4867 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pode809a11a-a5d8-49a0-9d9d-cac6a399dd35"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pode809a11a-a5d8-49a0-9d9d-cac6a399dd35] : Timed out while waiting for systemd to remove kubepods-besteffort-pode809a11a_a5d8_49a0_9d9d_cac6a399dd35.slice" Jan 01 08:51:01 crc kubenswrapper[4867]: E0101 08:51:01.805783 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort 
pode809a11a-a5d8-49a0-9d9d-cac6a399dd35] : unable to destroy cgroup paths for cgroup [kubepods besteffort pode809a11a-a5d8-49a0-9d9d-cac6a399dd35] : Timed out while waiting for systemd to remove kubepods-besteffort-pode809a11a_a5d8_49a0_9d9d_cac6a399dd35.slice" pod="openstack/glance-default-internal-api-0" podUID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" Jan 01 08:51:01 crc kubenswrapper[4867]: I0101 08:51:01.808791 4867 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podff82f43d-33bd-47f0-9864-83bb3048f9b2"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podff82f43d-33bd-47f0-9864-83bb3048f9b2] : Timed out while waiting for systemd to remove kubepods-besteffort-podff82f43d_33bd_47f0_9864_83bb3048f9b2.slice" Jan 01 08:51:01 crc kubenswrapper[4867]: E0101 08:51:01.808831 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podff82f43d-33bd-47f0-9864-83bb3048f9b2] : unable to destroy cgroup paths for cgroup [kubepods besteffort podff82f43d-33bd-47f0-9864-83bb3048f9b2] : Timed out while waiting for systemd to remove kubepods-besteffort-podff82f43d_33bd_47f0_9864_83bb3048f9b2.slice" pod="openstack/cinder-api-0" podUID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" Jan 01 08:51:02 crc kubenswrapper[4867]: I0101 08:51:02.700801 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 01 08:51:02 crc kubenswrapper[4867]: I0101 08:51:02.700881 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 01 08:51:02 crc kubenswrapper[4867]: I0101 08:51:02.803303 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:51:02 crc kubenswrapper[4867]: I0101 08:51:02.825311 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 01 08:51:02 crc kubenswrapper[4867]: I0101 08:51:02.836110 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 01 08:51:02 crc kubenswrapper[4867]: I0101 08:51:02.845500 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 01 08:51:03 crc kubenswrapper[4867]: I0101 08:51:03.142607 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" path="/var/lib/kubelet/pods/e809a11a-a5d8-49a0-9d9d-cac6a399dd35/volumes" Jan 01 08:51:03 crc kubenswrapper[4867]: I0101 08:51:03.144294 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" path="/var/lib/kubelet/pods/ff82f43d-33bd-47f0-9864-83bb3048f9b2/volumes" Jan 01 08:51:21 crc kubenswrapper[4867]: I0101 08:51:21.330901 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:51:21 crc kubenswrapper[4867]: I0101 08:51:21.331483 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:51:36 crc kubenswrapper[4867]: I0101 
08:51:36.185571 4867 scope.go:117] "RemoveContainer" containerID="4e331c080ef51c9e8e140526532ca6a567a4007701b3c6a5707e70e828973809" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.958118 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t4tph"] Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.958780 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e96caa-b906-4b24-af21-8068ea727bba" containerName="barbican-keystone-listener" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.958798 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e96caa-b906-4b24-af21-8068ea727bba" containerName="barbican-keystone-listener" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.958820 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2662702-83ed-4457-a630-e8a6d07ffb8b" containerName="galera" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.958828 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2662702-83ed-4457-a630-e8a6d07ffb8b" containerName="galera" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.958836 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" containerName="barbican-api-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.958842 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" containerName="barbican-api-log" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.958851 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-auditor" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.958858 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-auditor" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.958866 4867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-server" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.958874 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-server" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.958906 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" containerName="cinder-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.958914 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" containerName="cinder-api" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.958924 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-replicator" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.958950 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-replicator" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.958965 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02bf5c7d-1674-4308-8bcf-751d6c4a3783" containerName="ovn-controller" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.958970 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="02bf5c7d-1674-4308-8bcf-751d6c4a3783" containerName="ovn-controller" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.958978 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-expirer" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.958985 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-expirer" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.958998 4867 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-updater" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959004 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-updater" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959017 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-replicator" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959024 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-replicator" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959038 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovs-vswitchd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959045 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovs-vswitchd" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959053 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d7aac6-1073-41c0-acff-169e36ec197d" containerName="rabbitmq" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959059 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d7aac6-1073-41c0-acff-169e36ec197d" containerName="rabbitmq" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959071 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="sg-core" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959077 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="sg-core" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959091 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" containerName="cinder-api-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959099 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" containerName="cinder-api-log" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959106 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19551dba-c741-42e0-b228-6cad78717264" containerName="mariadb-account-create-update" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959112 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="19551dba-c741-42e0-b228-6cad78717264" containerName="mariadb-account-create-update" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959120 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server-init" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959126 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server-init" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959137 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="ceilometer-notification-agent" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959144 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="ceilometer-notification-agent" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959154 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e96caa-b906-4b24-af21-8068ea727bba" containerName="barbican-keystone-listener-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959161 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e96caa-b906-4b24-af21-8068ea727bba" containerName="barbican-keystone-listener-log" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 
08:51:40.959173 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-auditor" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959180 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-auditor" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959194 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959201 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959213 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1822baf8-11aa-4152-a74f-2ce0383c1094" containerName="placement-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959220 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1822baf8-11aa-4152-a74f-2ce0383c1094" containerName="placement-log" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959227 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-server" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959234 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-server" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959241 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-auditor" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959249 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-auditor" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959264 4867 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerName="ovn-northd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959271 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerName="ovn-northd" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959282 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959290 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-log" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959302 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-replicator" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959310 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-replicator" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959318 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8799ae41-c9cb-409a-ac59-3e6b59bb0198" containerName="nova-cell1-conductor-conductor" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959326 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8799ae41-c9cb-409a-ac59-3e6b59bb0198" containerName="nova-cell1-conductor-conductor" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959337 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerName="openstack-network-exporter" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959345 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerName="openstack-network-exporter" Jan 01 08:51:40 crc 
kubenswrapper[4867]: E0101 08:51:40.959353 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb" containerName="nova-cell0-conductor-conductor" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959359 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb" containerName="nova-cell0-conductor-conductor" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959368 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" containerName="barbican-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959374 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" containerName="barbican-api" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959381 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3205b065-c067-4035-8afb-e2bbcc7d8a42" containerName="cinder-scheduler" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959387 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3205b065-c067-4035-8afb-e2bbcc7d8a42" containerName="cinder-scheduler" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959393 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerName="nova-api-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959399 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerName="nova-api-log" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959407 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="rsync" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959412 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="rsync" Jan 01 08:51:40 crc 
kubenswrapper[4867]: E0101 08:51:40.959418 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-reaper" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959423 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-reaper" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959432 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" containerName="rabbitmq" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959437 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" containerName="rabbitmq" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959445 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d7aac6-1073-41c0-acff-169e36ec197d" containerName="setup-container" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959450 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d7aac6-1073-41c0-acff-169e36ec197d" containerName="setup-container" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959457 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fe2632-f8f6-4ef9-9f4c-72b69bd45932" containerName="barbican-worker-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959463 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fe2632-f8f6-4ef9-9f4c-72b69bd45932" containerName="barbican-worker-log" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959474 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1822baf8-11aa-4152-a74f-2ce0383c1094" containerName="placement-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959480 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1822baf8-11aa-4152-a74f-2ce0383c1094" containerName="placement-api" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 
08:51:40.959490 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9" containerName="nova-scheduler-scheduler" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959495 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9" containerName="nova-scheduler-scheduler" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959502 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-server" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959508 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-server" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959514 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" containerName="neutron-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959519 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" containerName="neutron-api" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959527 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f47f095-abde-4e07-8edf-d0a318043581" containerName="glance-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959532 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f47f095-abde-4e07-8edf-d0a318043581" containerName="glance-log" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959542 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fe2632-f8f6-4ef9-9f4c-72b69bd45932" containerName="barbican-worker" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959548 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fe2632-f8f6-4ef9-9f4c-72b69bd45932" containerName="barbican-worker" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959555 
4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerName="nova-api-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959560 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerName="nova-api-api" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959569 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-updater" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959575 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-updater" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959588 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" containerName="glance-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959593 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" containerName="glance-log" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959600 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f47f095-abde-4e07-8edf-d0a318043581" containerName="glance-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959606 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f47f095-abde-4e07-8edf-d0a318043581" containerName="glance-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959615 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="proxy-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959620 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="proxy-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959628 4867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" containerName="setup-container" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959633 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" containerName="setup-container" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959640 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985cc3ff-ea2f-4386-a828-180deef97412" containerName="keystone-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959646 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="985cc3ff-ea2f-4386-a828-180deef97412" containerName="keystone-api" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959653 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" containerName="glance-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959658 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" containerName="glance-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959664 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" containerName="neutron-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959670 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" containerName="neutron-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959678 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2662702-83ed-4457-a630-e8a6d07ffb8b" containerName="mysql-bootstrap" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959684 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2662702-83ed-4457-a630-e8a6d07ffb8b" containerName="mysql-bootstrap" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959691 4867 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3205b065-c067-4035-8afb-e2bbcc7d8a42" containerName="probe" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959697 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3205b065-c067-4035-8afb-e2bbcc7d8a42" containerName="probe" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959707 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" containerName="mysql-bootstrap" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959716 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" containerName="mysql-bootstrap" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959728 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="swift-recon-cron" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959735 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="swift-recon-cron" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959746 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" containerName="galera" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959751 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" containerName="galera" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959762 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="ceilometer-central-agent" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959767 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="ceilometer-central-agent" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959776 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b43ddff2-67cd-4ab7-84c1-763dd002457c" containerName="memcached" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959781 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43ddff2-67cd-4ab7-84c1-763dd002457c" containerName="memcached" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959787 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-metadata" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959792 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-metadata" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.959799 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b1cd3d-248e-4861-a69a-4c8d284babb3" containerName="kube-state-metrics" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959805 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b1cd3d-248e-4861-a69a-4c8d284babb3" containerName="kube-state-metrics" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959944 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-server" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959953 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aab3a81-e7f7-44c2-8a88-4a9ef8a55d99" containerName="rabbitmq" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959960 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="sg-core" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959969 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" containerName="cinder-api-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959980 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="84d7aac6-1073-41c0-acff-169e36ec197d" containerName="rabbitmq" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959990 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="swift-recon-cron" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.959996 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-auditor" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960005 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1822baf8-11aa-4152-a74f-2ce0383c1094" containerName="placement-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960012 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fe2632-f8f6-4ef9-9f4c-72b69bd45932" containerName="barbican-worker-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960020 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="proxy-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960027 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b1cd3d-248e-4861-a69a-4c8d284babb3" containerName="kube-state-metrics" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960034 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960041 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d2e6f4b-31bf-4ad6-89ed-ebdd4f3aa5d9" containerName="nova-scheduler-scheduler" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960048 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovsdb-server" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960057 4867 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="3205b065-c067-4035-8afb-e2bbcc7d8a42" containerName="probe" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960063 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerName="openstack-network-exporter" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960072 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-server" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960080 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-server" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960086 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="ceilometer-central-agent" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960096 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-auditor" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960106 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="19551dba-c741-42e0-b228-6cad78717264" containerName="mariadb-account-create-update" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960115 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-replicator" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960122 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1822baf8-11aa-4152-a74f-2ce0383c1094" containerName="placement-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960131 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fe2632-f8f6-4ef9-9f4c-72b69bd45932" containerName="barbican-worker" Jan 01 08:51:40 crc 
kubenswrapper[4867]: I0101 08:51:40.960138 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f47f095-abde-4e07-8edf-d0a318043581" containerName="glance-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960146 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="985cc3ff-ea2f-4386-a828-180deef97412" containerName="keystone-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960155 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" containerName="barbican-api-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960162 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e96caa-b906-4b24-af21-8068ea727bba" containerName="barbican-keystone-listener-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960171 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerName="nova-api-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960178 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="19551dba-c741-42e0-b228-6cad78717264" containerName="mariadb-account-create-update" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960186 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" containerName="glance-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960194 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c2a7f74-c5ce-45fb-a1fa-c19c025aea20" containerName="barbican-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960201 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3205b065-c067-4035-8afb-e2bbcc7d8a42" containerName="cinder-scheduler" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960213 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43ddff2-67cd-4ab7-84c1-763dd002457c" 
containerName="memcached" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960221 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-updater" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960229 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-updater" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960234 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e96caa-b906-4b24-af21-8068ea727bba" containerName="barbican-keystone-listener" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960244 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b4e13b-b1c7-49b6-8ac7-d6c74c869c7e" containerName="ovs-vswitchd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960253 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e809a11a-a5d8-49a0-9d9d-cac6a399dd35" containerName="glance-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960262 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda1d2c0-2470-41f9-9969-776f8883a38b" containerName="nova-api-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960269 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-replicator" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960274 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" containerName="neutron-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960281 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff82f43d-33bd-47f0-9864-83bb3048f9b2" containerName="cinder-api" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960289 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e7003b80-53fa-4550-8f18-486a0f7988c9" containerName="nova-metadata-metadata" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960297 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9943de7c-1d29-416f-ba57-ea51bf9e56f3" containerName="ovn-northd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960307 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="container-auditor" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960314 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="account-reaper" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960323 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0973b1fb-6399-4d31-aa7e-2a41a163e4f4" containerName="neutron-httpd" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960331 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-expirer" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960337 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f47f095-abde-4e07-8edf-d0a318043581" containerName="glance-log" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960344 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb3185a-7a53-4d1a-a1c0-ec2fa0490ffb" containerName="nova-cell0-conductor-conductor" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960351 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="02bf5c7d-1674-4308-8bcf-751d6c4a3783" containerName="ovn-controller" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960359 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="object-replicator" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960365 4867 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d2662702-83ed-4457-a630-e8a6d07ffb8b" containerName="galera" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960376 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dad921b-d7dd-4113-85d2-78d6f59944b4" containerName="ceilometer-notification-agent" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960385 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd7d188-bdc2-4aa8-891b-0775de1a5eeb" containerName="galera" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960390 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f687f2-3229-401c-b5cb-f79e96311c45" containerName="rsync" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960397 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8799ae41-c9cb-409a-ac59-3e6b59bb0198" containerName="nova-cell1-conductor-conductor" Jan 01 08:51:40 crc kubenswrapper[4867]: E0101 08:51:40.960515 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19551dba-c741-42e0-b228-6cad78717264" containerName="mariadb-account-create-update" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.960523 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="19551dba-c741-42e0-b228-6cad78717264" containerName="mariadb-account-create-update" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.962223 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:40 crc kubenswrapper[4867]: I0101 08:51:40.974072 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4tph"] Jan 01 08:51:41 crc kubenswrapper[4867]: I0101 08:51:41.025326 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89tsm\" (UniqueName: \"kubernetes.io/projected/c6822afc-ac34-45d2-b36d-e912ebbaa317-kube-api-access-89tsm\") pod \"redhat-marketplace-t4tph\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:41 crc kubenswrapper[4867]: I0101 08:51:41.025398 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-catalog-content\") pod \"redhat-marketplace-t4tph\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:41 crc kubenswrapper[4867]: I0101 08:51:41.025459 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-utilities\") pod \"redhat-marketplace-t4tph\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:41 crc kubenswrapper[4867]: I0101 08:51:41.127292 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-utilities\") pod \"redhat-marketplace-t4tph\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:41 crc kubenswrapper[4867]: I0101 08:51:41.127405 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-89tsm\" (UniqueName: \"kubernetes.io/projected/c6822afc-ac34-45d2-b36d-e912ebbaa317-kube-api-access-89tsm\") pod \"redhat-marketplace-t4tph\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:41 crc kubenswrapper[4867]: I0101 08:51:41.127518 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-catalog-content\") pod \"redhat-marketplace-t4tph\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:41 crc kubenswrapper[4867]: I0101 08:51:41.128378 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-catalog-content\") pod \"redhat-marketplace-t4tph\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:41 crc kubenswrapper[4867]: I0101 08:51:41.134713 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-utilities\") pod \"redhat-marketplace-t4tph\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:41 crc kubenswrapper[4867]: I0101 08:51:41.154869 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89tsm\" (UniqueName: \"kubernetes.io/projected/c6822afc-ac34-45d2-b36d-e912ebbaa317-kube-api-access-89tsm\") pod \"redhat-marketplace-t4tph\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:41 crc kubenswrapper[4867]: I0101 08:51:41.284739 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:41 crc kubenswrapper[4867]: I0101 08:51:41.703726 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4tph"] Jan 01 08:51:42 crc kubenswrapper[4867]: I0101 08:51:42.123273 4867 generic.go:334] "Generic (PLEG): container finished" podID="c6822afc-ac34-45d2-b36d-e912ebbaa317" containerID="2232adeebe28d4e33600ad51b02a8aa4ced1135debddb11c594d47337c922773" exitCode=0 Jan 01 08:51:42 crc kubenswrapper[4867]: I0101 08:51:42.123379 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4tph" event={"ID":"c6822afc-ac34-45d2-b36d-e912ebbaa317","Type":"ContainerDied","Data":"2232adeebe28d4e33600ad51b02a8aa4ced1135debddb11c594d47337c922773"} Jan 01 08:51:42 crc kubenswrapper[4867]: I0101 08:51:42.123635 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4tph" event={"ID":"c6822afc-ac34-45d2-b36d-e912ebbaa317","Type":"ContainerStarted","Data":"0f9339dc3c78f41df498f3f58d27ed3c8336fb8be096495ac1f1689a8ab667f2"} Jan 01 08:51:42 crc kubenswrapper[4867]: I0101 08:51:42.125508 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 08:51:43 crc kubenswrapper[4867]: I0101 08:51:43.150215 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4tph" event={"ID":"c6822afc-ac34-45d2-b36d-e912ebbaa317","Type":"ContainerStarted","Data":"151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218"} Jan 01 08:51:44 crc kubenswrapper[4867]: I0101 08:51:44.165801 4867 generic.go:334] "Generic (PLEG): container finished" podID="c6822afc-ac34-45d2-b36d-e912ebbaa317" containerID="151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218" exitCode=0 Jan 01 08:51:44 crc kubenswrapper[4867]: I0101 08:51:44.165854 4867 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-t4tph" event={"ID":"c6822afc-ac34-45d2-b36d-e912ebbaa317","Type":"ContainerDied","Data":"151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218"} Jan 01 08:51:45 crc kubenswrapper[4867]: I0101 08:51:45.181680 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4tph" event={"ID":"c6822afc-ac34-45d2-b36d-e912ebbaa317","Type":"ContainerStarted","Data":"ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430"} Jan 01 08:51:45 crc kubenswrapper[4867]: I0101 08:51:45.205678 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t4tph" podStartSLOduration=2.756082662 podStartE2EDuration="5.205655889s" podCreationTimestamp="2026-01-01 08:51:40 +0000 UTC" firstStartedPulling="2026-01-01 08:51:42.125175215 +0000 UTC m=+1511.260444004" lastFinishedPulling="2026-01-01 08:51:44.574748422 +0000 UTC m=+1513.710017231" observedRunningTime="2026-01-01 08:51:45.202635084 +0000 UTC m=+1514.337903883" watchObservedRunningTime="2026-01-01 08:51:45.205655889 +0000 UTC m=+1514.340924668" Jan 01 08:51:51 crc kubenswrapper[4867]: I0101 08:51:51.285452 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:51 crc kubenswrapper[4867]: I0101 08:51:51.286031 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:51 crc kubenswrapper[4867]: I0101 08:51:51.330720 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:51:51 crc kubenswrapper[4867]: I0101 08:51:51.330789 4867 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 08:51:51 crc kubenswrapper[4867]: I0101 08:51:51.348130 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:52 crc kubenswrapper[4867]: I0101 08:51:52.313169 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:52 crc kubenswrapper[4867]: I0101 08:51:52.372182 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4tph"] Jan 01 08:51:54 crc kubenswrapper[4867]: I0101 08:51:54.278466 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t4tph" podUID="c6822afc-ac34-45d2-b36d-e912ebbaa317" containerName="registry-server" containerID="cri-o://ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430" gracePeriod=2 Jan 01 08:51:54 crc kubenswrapper[4867]: I0101 08:51:54.752645 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:54 crc kubenswrapper[4867]: I0101 08:51:54.828781 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89tsm\" (UniqueName: \"kubernetes.io/projected/c6822afc-ac34-45d2-b36d-e912ebbaa317-kube-api-access-89tsm\") pod \"c6822afc-ac34-45d2-b36d-e912ebbaa317\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " Jan 01 08:51:54 crc kubenswrapper[4867]: I0101 08:51:54.828829 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-utilities\") pod \"c6822afc-ac34-45d2-b36d-e912ebbaa317\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " Jan 01 08:51:54 crc kubenswrapper[4867]: I0101 08:51:54.828870 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-catalog-content\") pod \"c6822afc-ac34-45d2-b36d-e912ebbaa317\" (UID: \"c6822afc-ac34-45d2-b36d-e912ebbaa317\") " Jan 01 08:51:54 crc kubenswrapper[4867]: I0101 08:51:54.829782 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-utilities" (OuterVolumeSpecName: "utilities") pod "c6822afc-ac34-45d2-b36d-e912ebbaa317" (UID: "c6822afc-ac34-45d2-b36d-e912ebbaa317"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:51:54 crc kubenswrapper[4867]: I0101 08:51:54.834580 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6822afc-ac34-45d2-b36d-e912ebbaa317-kube-api-access-89tsm" (OuterVolumeSpecName: "kube-api-access-89tsm") pod "c6822afc-ac34-45d2-b36d-e912ebbaa317" (UID: "c6822afc-ac34-45d2-b36d-e912ebbaa317"). InnerVolumeSpecName "kube-api-access-89tsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:51:54 crc kubenswrapper[4867]: I0101 08:51:54.851489 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6822afc-ac34-45d2-b36d-e912ebbaa317" (UID: "c6822afc-ac34-45d2-b36d-e912ebbaa317"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:51:54 crc kubenswrapper[4867]: I0101 08:51:54.930924 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89tsm\" (UniqueName: \"kubernetes.io/projected/c6822afc-ac34-45d2-b36d-e912ebbaa317-kube-api-access-89tsm\") on node \"crc\" DevicePath \"\"" Jan 01 08:51:54 crc kubenswrapper[4867]: I0101 08:51:54.930960 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:51:54 crc kubenswrapper[4867]: I0101 08:51:54.930991 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6822afc-ac34-45d2-b36d-e912ebbaa317-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.294375 4867 generic.go:334] "Generic (PLEG): container finished" podID="c6822afc-ac34-45d2-b36d-e912ebbaa317" containerID="ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430" exitCode=0 Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.294420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4tph" event={"ID":"c6822afc-ac34-45d2-b36d-e912ebbaa317","Type":"ContainerDied","Data":"ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430"} Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.294446 4867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-t4tph" event={"ID":"c6822afc-ac34-45d2-b36d-e912ebbaa317","Type":"ContainerDied","Data":"0f9339dc3c78f41df498f3f58d27ed3c8336fb8be096495ac1f1689a8ab667f2"} Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.294450 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4tph" Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.294463 4867 scope.go:117] "RemoveContainer" containerID="ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430" Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.328723 4867 scope.go:117] "RemoveContainer" containerID="151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218" Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.330499 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4tph"] Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.338904 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4tph"] Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.350139 4867 scope.go:117] "RemoveContainer" containerID="2232adeebe28d4e33600ad51b02a8aa4ced1135debddb11c594d47337c922773" Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.404966 4867 scope.go:117] "RemoveContainer" containerID="ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430" Jan 01 08:51:55 crc kubenswrapper[4867]: E0101 08:51:55.405547 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430\": container with ID starting with ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430 not found: ID does not exist" containerID="ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430" Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.405617 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430"} err="failed to get container status \"ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430\": rpc error: code = NotFound desc = could not find container \"ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430\": container with ID starting with ad17baf27d1a14e7e850adbcc1b69984ad9d75e40396783e108b79235a487430 not found: ID does not exist" Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.405666 4867 scope.go:117] "RemoveContainer" containerID="151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218" Jan 01 08:51:55 crc kubenswrapper[4867]: E0101 08:51:55.406207 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218\": container with ID starting with 151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218 not found: ID does not exist" containerID="151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218" Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.406254 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218"} err="failed to get container status \"151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218\": rpc error: code = NotFound desc = could not find container \"151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218\": container with ID starting with 151a6af991fd2a70309f18330733420f430eca2ce71b9f1c435168088762d218 not found: ID does not exist" Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.406271 4867 scope.go:117] "RemoveContainer" containerID="2232adeebe28d4e33600ad51b02a8aa4ced1135debddb11c594d47337c922773" Jan 01 08:51:55 crc kubenswrapper[4867]: E0101 
08:51:55.406726 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2232adeebe28d4e33600ad51b02a8aa4ced1135debddb11c594d47337c922773\": container with ID starting with 2232adeebe28d4e33600ad51b02a8aa4ced1135debddb11c594d47337c922773 not found: ID does not exist" containerID="2232adeebe28d4e33600ad51b02a8aa4ced1135debddb11c594d47337c922773" Jan 01 08:51:55 crc kubenswrapper[4867]: I0101 08:51:55.406767 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2232adeebe28d4e33600ad51b02a8aa4ced1135debddb11c594d47337c922773"} err="failed to get container status \"2232adeebe28d4e33600ad51b02a8aa4ced1135debddb11c594d47337c922773\": rpc error: code = NotFound desc = could not find container \"2232adeebe28d4e33600ad51b02a8aa4ced1135debddb11c594d47337c922773\": container with ID starting with 2232adeebe28d4e33600ad51b02a8aa4ced1135debddb11c594d47337c922773 not found: ID does not exist" Jan 01 08:51:57 crc kubenswrapper[4867]: I0101 08:51:57.143438 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6822afc-ac34-45d2-b36d-e912ebbaa317" path="/var/lib/kubelet/pods/c6822afc-ac34-45d2-b36d-e912ebbaa317/volumes" Jan 01 08:52:21 crc kubenswrapper[4867]: I0101 08:52:21.330847 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:52:21 crc kubenswrapper[4867]: I0101 08:52:21.331530 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 01 08:52:21 crc kubenswrapper[4867]: I0101 08:52:21.331581 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 08:52:21 crc kubenswrapper[4867]: I0101 08:52:21.332156 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 08:52:21 crc kubenswrapper[4867]: I0101 08:52:21.332224 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" gracePeriod=600 Jan 01 08:52:21 crc kubenswrapper[4867]: E0101 08:52:21.467688 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:52:21 crc kubenswrapper[4867]: I0101 08:52:21.568961 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" exitCode=0 Jan 01 08:52:21 crc kubenswrapper[4867]: I0101 08:52:21.569047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" 
event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56"} Jan 01 08:52:21 crc kubenswrapper[4867]: I0101 08:52:21.569373 4867 scope.go:117] "RemoveContainer" containerID="fae12ab6ce4b32e7095b166bc2001d0435bf314dafdb60059b95e31213f00b52" Jan 01 08:52:21 crc kubenswrapper[4867]: I0101 08:52:21.569988 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:52:21 crc kubenswrapper[4867]: E0101 08:52:21.570336 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.532254 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n9wtf"] Jan 01 08:52:25 crc kubenswrapper[4867]: E0101 08:52:25.535323 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6822afc-ac34-45d2-b36d-e912ebbaa317" containerName="extract-utilities" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.535587 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6822afc-ac34-45d2-b36d-e912ebbaa317" containerName="extract-utilities" Jan 01 08:52:25 crc kubenswrapper[4867]: E0101 08:52:25.535784 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6822afc-ac34-45d2-b36d-e912ebbaa317" containerName="extract-content" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.536061 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6822afc-ac34-45d2-b36d-e912ebbaa317" containerName="extract-content" Jan 01 08:52:25 crc 
kubenswrapper[4867]: E0101 08:52:25.536291 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6822afc-ac34-45d2-b36d-e912ebbaa317" containerName="registry-server" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.536476 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6822afc-ac34-45d2-b36d-e912ebbaa317" containerName="registry-server" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.537028 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6822afc-ac34-45d2-b36d-e912ebbaa317" containerName="registry-server" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.539532 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.554556 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n9wtf"] Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.689936 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-catalog-content\") pod \"certified-operators-n9wtf\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.690360 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9skfk\" (UniqueName: \"kubernetes.io/projected/5266c508-eafd-40d1-8021-74486191690d-kube-api-access-9skfk\") pod \"certified-operators-n9wtf\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.690511 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-utilities\") pod \"certified-operators-n9wtf\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.792209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-utilities\") pod \"certified-operators-n9wtf\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.792314 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-catalog-content\") pod \"certified-operators-n9wtf\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.792399 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9skfk\" (UniqueName: \"kubernetes.io/projected/5266c508-eafd-40d1-8021-74486191690d-kube-api-access-9skfk\") pod \"certified-operators-n9wtf\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.792694 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-utilities\") pod \"certified-operators-n9wtf\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.793281 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-catalog-content\") pod \"certified-operators-n9wtf\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.813790 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9skfk\" (UniqueName: \"kubernetes.io/projected/5266c508-eafd-40d1-8021-74486191690d-kube-api-access-9skfk\") pod \"certified-operators-n9wtf\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:25 crc kubenswrapper[4867]: I0101 08:52:25.870679 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:26 crc kubenswrapper[4867]: I0101 08:52:26.337650 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n9wtf"] Jan 01 08:52:26 crc kubenswrapper[4867]: I0101 08:52:26.617325 4867 generic.go:334] "Generic (PLEG): container finished" podID="5266c508-eafd-40d1-8021-74486191690d" containerID="912993f0b9f311bed84b04f05975e13a883f21221e8c6d19f30f3e4bcd5663f0" exitCode=0 Jan 01 08:52:26 crc kubenswrapper[4867]: I0101 08:52:26.617385 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9wtf" event={"ID":"5266c508-eafd-40d1-8021-74486191690d","Type":"ContainerDied","Data":"912993f0b9f311bed84b04f05975e13a883f21221e8c6d19f30f3e4bcd5663f0"} Jan 01 08:52:26 crc kubenswrapper[4867]: I0101 08:52:26.617420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9wtf" event={"ID":"5266c508-eafd-40d1-8021-74486191690d","Type":"ContainerStarted","Data":"41a98b1034c5059cdac56e006e5943a3516db66726fc69aae7ff3029a78a0bc6"} Jan 01 08:52:28 crc kubenswrapper[4867]: I0101 08:52:28.697261 4867 generic.go:334] "Generic (PLEG): container 
finished" podID="5266c508-eafd-40d1-8021-74486191690d" containerID="173cada81a47e648c890ea433801ae4da4c274fdb3437b5a478c8d41e9c79a4c" exitCode=0 Jan 01 08:52:28 crc kubenswrapper[4867]: I0101 08:52:28.697325 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9wtf" event={"ID":"5266c508-eafd-40d1-8021-74486191690d","Type":"ContainerDied","Data":"173cada81a47e648c890ea433801ae4da4c274fdb3437b5a478c8d41e9c79a4c"} Jan 01 08:52:29 crc kubenswrapper[4867]: I0101 08:52:29.715314 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9wtf" event={"ID":"5266c508-eafd-40d1-8021-74486191690d","Type":"ContainerStarted","Data":"f40029182b2400820137305886f3cfc3530e8931da404ac36c7b75064a154757"} Jan 01 08:52:29 crc kubenswrapper[4867]: I0101 08:52:29.744554 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n9wtf" podStartSLOduration=2.135341511 podStartE2EDuration="4.744530541s" podCreationTimestamp="2026-01-01 08:52:25 +0000 UTC" firstStartedPulling="2026-01-01 08:52:26.620817344 +0000 UTC m=+1555.756086153" lastFinishedPulling="2026-01-01 08:52:29.230006384 +0000 UTC m=+1558.365275183" observedRunningTime="2026-01-01 08:52:29.741125556 +0000 UTC m=+1558.876394355" watchObservedRunningTime="2026-01-01 08:52:29.744530541 +0000 UTC m=+1558.879799330" Jan 01 08:52:35 crc kubenswrapper[4867]: I0101 08:52:35.870963 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:35 crc kubenswrapper[4867]: I0101 08:52:35.871226 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:35 crc kubenswrapper[4867]: I0101 08:52:35.926254 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n9wtf" Jan 
01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.351405 4867 scope.go:117] "RemoveContainer" containerID="1019ef1d5ef9d449064f64b97f1872918f743f75ee494fc24a8b9c9b5d1bdb12" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.398820 4867 scope.go:117] "RemoveContainer" containerID="6dfaece66e1fdc989b0ac443c52ee434f3a3f5bd5a2577a0c1f5bba039a354c7" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.460815 4867 scope.go:117] "RemoveContainer" containerID="f133afbf0ec3f1bc8e7cda6da7ec25adcc5fedc08b643a231a16e6ba3a90ede2" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.510165 4867 scope.go:117] "RemoveContainer" containerID="6da8d6827e35aa61466e5baf27b39069b8e735896ff0db9421f9f30c4950bc50" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.542731 4867 scope.go:117] "RemoveContainer" containerID="e76f85e1ac6a5b72f9751971e1db16afce532dc787a80afdc738f41e023e7b04" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.585498 4867 scope.go:117] "RemoveContainer" containerID="5b37eaae3cd4207fb60bf9295335cfc6fc1bae0503f20ddf7845ef59590b88e0" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.613041 4867 scope.go:117] "RemoveContainer" containerID="47cb0673b389a01e11b37b3511a5b9946a15f3796633aee01d44a7b3a87da5dc" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.635852 4867 scope.go:117] "RemoveContainer" containerID="0d0ed7617262d47e474047d2c96fd322c3300a423e91e4a44690bf1270fbf435" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.657248 4867 scope.go:117] "RemoveContainer" containerID="797df420b6252d0a8b79bcb4bf7f136bf6cec5144a63c844132b22175617f27e" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.681311 4867 scope.go:117] "RemoveContainer" containerID="2059162a240a8916135e9afa7c5fdbf55bbc69712359831defcfc55e9745560a" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.714415 4867 scope.go:117] "RemoveContainer" containerID="55b82754575fc6c9e726dd5ee34b5516da67786d9139b5d74b285932ea1f32ca" Jan 01 08:52:36 crc 
kubenswrapper[4867]: I0101 08:52:36.742150 4867 scope.go:117] "RemoveContainer" containerID="27a459f45063b31c51749133d58df4adb80c865b6cba29a1ddd5b68907c8e260" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.765110 4867 scope.go:117] "RemoveContainer" containerID="c9f0f002e87ad70f25a706911fd3098bc1fdeefab04560ef790da58f6517f5f2" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.790612 4867 scope.go:117] "RemoveContainer" containerID="334d4d6eb157e3eebf46b0a2ddb1231acde816efcbd086adbf555a1977e94879" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.819953 4867 scope.go:117] "RemoveContainer" containerID="0b77667b1946dbb880951c13289dda1a815b3f0ebcce03d5ef30d4b809c5bd4b" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.848986 4867 scope.go:117] "RemoveContainer" containerID="18becf772101ab6b2c53a4dce6cb85a47ab0a01a65a2c7b3664c945f540dbbb7" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.868379 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.886311 4867 scope.go:117] "RemoveContainer" containerID="0d11f9fcb8565559e963bacc9e303403745a1fea936cb1a19e716220ca56c821" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.915292 4867 scope.go:117] "RemoveContainer" containerID="0d53a6fe6b01eb124abe4dedb90d283c58982a5634b2bd9c0b8f028378652e7d" Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.937219 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n9wtf"] Jan 01 08:52:36 crc kubenswrapper[4867]: I0101 08:52:36.943231 4867 scope.go:117] "RemoveContainer" containerID="11198450002d61606555b27ce0d74c2698b23a1a3fbd35217506bd8a14f9de29" Jan 01 08:52:37 crc kubenswrapper[4867]: I0101 08:52:37.131922 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:52:37 crc kubenswrapper[4867]: 
E0101 08:52:37.132431 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:52:38 crc kubenswrapper[4867]: I0101 08:52:38.830373 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n9wtf" podUID="5266c508-eafd-40d1-8021-74486191690d" containerName="registry-server" containerID="cri-o://f40029182b2400820137305886f3cfc3530e8931da404ac36c7b75064a154757" gracePeriod=2 Jan 01 08:52:39 crc kubenswrapper[4867]: I0101 08:52:39.841007 4867 generic.go:334] "Generic (PLEG): container finished" podID="5266c508-eafd-40d1-8021-74486191690d" containerID="f40029182b2400820137305886f3cfc3530e8931da404ac36c7b75064a154757" exitCode=0 Jan 01 08:52:39 crc kubenswrapper[4867]: I0101 08:52:39.841090 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9wtf" event={"ID":"5266c508-eafd-40d1-8021-74486191690d","Type":"ContainerDied","Data":"f40029182b2400820137305886f3cfc3530e8931da404ac36c7b75064a154757"} Jan 01 08:52:39 crc kubenswrapper[4867]: I0101 08:52:39.841423 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n9wtf" event={"ID":"5266c508-eafd-40d1-8021-74486191690d","Type":"ContainerDied","Data":"41a98b1034c5059cdac56e006e5943a3516db66726fc69aae7ff3029a78a0bc6"} Jan 01 08:52:39 crc kubenswrapper[4867]: I0101 08:52:39.841447 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41a98b1034c5059cdac56e006e5943a3516db66726fc69aae7ff3029a78a0bc6" Jan 01 08:52:39 crc kubenswrapper[4867]: I0101 
08:52:39.868028 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:39 crc kubenswrapper[4867]: I0101 08:52:39.909390 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-catalog-content\") pod \"5266c508-eafd-40d1-8021-74486191690d\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " Jan 01 08:52:39 crc kubenswrapper[4867]: I0101 08:52:39.909461 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-utilities\") pod \"5266c508-eafd-40d1-8021-74486191690d\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " Jan 01 08:52:39 crc kubenswrapper[4867]: I0101 08:52:39.909490 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9skfk\" (UniqueName: \"kubernetes.io/projected/5266c508-eafd-40d1-8021-74486191690d-kube-api-access-9skfk\") pod \"5266c508-eafd-40d1-8021-74486191690d\" (UID: \"5266c508-eafd-40d1-8021-74486191690d\") " Jan 01 08:52:39 crc kubenswrapper[4867]: I0101 08:52:39.913160 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-utilities" (OuterVolumeSpecName: "utilities") pod "5266c508-eafd-40d1-8021-74486191690d" (UID: "5266c508-eafd-40d1-8021-74486191690d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:52:39 crc kubenswrapper[4867]: I0101 08:52:39.934780 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5266c508-eafd-40d1-8021-74486191690d-kube-api-access-9skfk" (OuterVolumeSpecName: "kube-api-access-9skfk") pod "5266c508-eafd-40d1-8021-74486191690d" (UID: "5266c508-eafd-40d1-8021-74486191690d"). InnerVolumeSpecName "kube-api-access-9skfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:52:39 crc kubenswrapper[4867]: I0101 08:52:39.987257 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5266c508-eafd-40d1-8021-74486191690d" (UID: "5266c508-eafd-40d1-8021-74486191690d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:52:40 crc kubenswrapper[4867]: I0101 08:52:40.011187 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:52:40 crc kubenswrapper[4867]: I0101 08:52:40.011219 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9skfk\" (UniqueName: \"kubernetes.io/projected/5266c508-eafd-40d1-8021-74486191690d-kube-api-access-9skfk\") on node \"crc\" DevicePath \"\"" Jan 01 08:52:40 crc kubenswrapper[4867]: I0101 08:52:40.011229 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5266c508-eafd-40d1-8021-74486191690d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:52:40 crc kubenswrapper[4867]: I0101 08:52:40.851226 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n9wtf" Jan 01 08:52:40 crc kubenswrapper[4867]: I0101 08:52:40.897506 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n9wtf"] Jan 01 08:52:40 crc kubenswrapper[4867]: I0101 08:52:40.907437 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n9wtf"] Jan 01 08:52:41 crc kubenswrapper[4867]: I0101 08:52:41.144119 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5266c508-eafd-40d1-8021-74486191690d" path="/var/lib/kubelet/pods/5266c508-eafd-40d1-8021-74486191690d/volumes" Jan 01 08:52:51 crc kubenswrapper[4867]: I0101 08:52:51.135755 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:52:51 crc kubenswrapper[4867]: E0101 08:52:51.136794 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:53:05 crc kubenswrapper[4867]: I0101 08:53:05.130075 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:53:05 crc kubenswrapper[4867]: E0101 08:53:05.131075 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" 
podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:53:06 crc kubenswrapper[4867]: I0101 08:53:06.748540 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2vg28"] Jan 01 08:53:06 crc kubenswrapper[4867]: E0101 08:53:06.749018 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5266c508-eafd-40d1-8021-74486191690d" containerName="registry-server" Jan 01 08:53:06 crc kubenswrapper[4867]: I0101 08:53:06.749040 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5266c508-eafd-40d1-8021-74486191690d" containerName="registry-server" Jan 01 08:53:06 crc kubenswrapper[4867]: E0101 08:53:06.749072 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5266c508-eafd-40d1-8021-74486191690d" containerName="extract-content" Jan 01 08:53:06 crc kubenswrapper[4867]: I0101 08:53:06.749085 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5266c508-eafd-40d1-8021-74486191690d" containerName="extract-content" Jan 01 08:53:06 crc kubenswrapper[4867]: E0101 08:53:06.749118 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5266c508-eafd-40d1-8021-74486191690d" containerName="extract-utilities" Jan 01 08:53:06 crc kubenswrapper[4867]: I0101 08:53:06.749151 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5266c508-eafd-40d1-8021-74486191690d" containerName="extract-utilities" Jan 01 08:53:06 crc kubenswrapper[4867]: I0101 08:53:06.749366 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5266c508-eafd-40d1-8021-74486191690d" containerName="registry-server" Jan 01 08:53:06 crc kubenswrapper[4867]: I0101 08:53:06.751409 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:06 crc kubenswrapper[4867]: I0101 08:53:06.766515 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2vg28"] Jan 01 08:53:06 crc kubenswrapper[4867]: I0101 08:53:06.918328 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-utilities\") pod \"community-operators-2vg28\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:06 crc kubenswrapper[4867]: I0101 08:53:06.918399 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvnjr\" (UniqueName: \"kubernetes.io/projected/60d570cf-e14b-438c-8925-6cb98f99765d-kube-api-access-jvnjr\") pod \"community-operators-2vg28\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:06 crc kubenswrapper[4867]: I0101 08:53:06.918434 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-catalog-content\") pod \"community-operators-2vg28\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:07 crc kubenswrapper[4867]: I0101 08:53:07.019194 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvnjr\" (UniqueName: \"kubernetes.io/projected/60d570cf-e14b-438c-8925-6cb98f99765d-kube-api-access-jvnjr\") pod \"community-operators-2vg28\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:07 crc kubenswrapper[4867]: I0101 08:53:07.019468 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-catalog-content\") pod \"community-operators-2vg28\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:07 crc kubenswrapper[4867]: I0101 08:53:07.019620 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-utilities\") pod \"community-operators-2vg28\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:07 crc kubenswrapper[4867]: I0101 08:53:07.020132 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-utilities\") pod \"community-operators-2vg28\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:07 crc kubenswrapper[4867]: I0101 08:53:07.020326 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-catalog-content\") pod \"community-operators-2vg28\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:07 crc kubenswrapper[4867]: I0101 08:53:07.039548 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvnjr\" (UniqueName: \"kubernetes.io/projected/60d570cf-e14b-438c-8925-6cb98f99765d-kube-api-access-jvnjr\") pod \"community-operators-2vg28\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:07 crc kubenswrapper[4867]: I0101 08:53:07.080366 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:07 crc kubenswrapper[4867]: I0101 08:53:07.583506 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2vg28"] Jan 01 08:53:07 crc kubenswrapper[4867]: W0101 08:53:07.592373 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60d570cf_e14b_438c_8925_6cb98f99765d.slice/crio-40fa08b400ea9ca7ea7d2d87fb09907c32d5b11309f8bfe7f2c6a5cbafcbcd51 WatchSource:0}: Error finding container 40fa08b400ea9ca7ea7d2d87fb09907c32d5b11309f8bfe7f2c6a5cbafcbcd51: Status 404 returned error can't find the container with id 40fa08b400ea9ca7ea7d2d87fb09907c32d5b11309f8bfe7f2c6a5cbafcbcd51 Jan 01 08:53:08 crc kubenswrapper[4867]: I0101 08:53:08.148490 4867 generic.go:334] "Generic (PLEG): container finished" podID="60d570cf-e14b-438c-8925-6cb98f99765d" containerID="8c53b284ee4c48bdb2f67485e7aa7204ecfec68b6d4bee8775812bcf89635ab9" exitCode=0 Jan 01 08:53:08 crc kubenswrapper[4867]: I0101 08:53:08.148547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vg28" event={"ID":"60d570cf-e14b-438c-8925-6cb98f99765d","Type":"ContainerDied","Data":"8c53b284ee4c48bdb2f67485e7aa7204ecfec68b6d4bee8775812bcf89635ab9"} Jan 01 08:53:08 crc kubenswrapper[4867]: I0101 08:53:08.148943 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vg28" event={"ID":"60d570cf-e14b-438c-8925-6cb98f99765d","Type":"ContainerStarted","Data":"40fa08b400ea9ca7ea7d2d87fb09907c32d5b11309f8bfe7f2c6a5cbafcbcd51"} Jan 01 08:53:09 crc kubenswrapper[4867]: I0101 08:53:09.164403 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vg28" 
event={"ID":"60d570cf-e14b-438c-8925-6cb98f99765d","Type":"ContainerStarted","Data":"04a3bf9dc79781034a133657ac46ef10630d6dacebaa157c8bc8fa62df8929aa"} Jan 01 08:53:09 crc kubenswrapper[4867]: E0101 08:53:09.403965 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60d570cf_e14b_438c_8925_6cb98f99765d.slice/crio-conmon-04a3bf9dc79781034a133657ac46ef10630d6dacebaa157c8bc8fa62df8929aa.scope\": RecentStats: unable to find data in memory cache]" Jan 01 08:53:10 crc kubenswrapper[4867]: I0101 08:53:10.178163 4867 generic.go:334] "Generic (PLEG): container finished" podID="60d570cf-e14b-438c-8925-6cb98f99765d" containerID="04a3bf9dc79781034a133657ac46ef10630d6dacebaa157c8bc8fa62df8929aa" exitCode=0 Jan 01 08:53:10 crc kubenswrapper[4867]: I0101 08:53:10.178229 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vg28" event={"ID":"60d570cf-e14b-438c-8925-6cb98f99765d","Type":"ContainerDied","Data":"04a3bf9dc79781034a133657ac46ef10630d6dacebaa157c8bc8fa62df8929aa"} Jan 01 08:53:11 crc kubenswrapper[4867]: I0101 08:53:11.193280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vg28" event={"ID":"60d570cf-e14b-438c-8925-6cb98f99765d","Type":"ContainerStarted","Data":"c615d8e025fb2ab9e15e5d995a823e046035570d9db9d5c179c8fff488bf1500"} Jan 01 08:53:11 crc kubenswrapper[4867]: I0101 08:53:11.229629 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2vg28" podStartSLOduration=2.58200342 podStartE2EDuration="5.229603652s" podCreationTimestamp="2026-01-01 08:53:06 +0000 UTC" firstStartedPulling="2026-01-01 08:53:08.152536974 +0000 UTC m=+1597.287805743" lastFinishedPulling="2026-01-01 08:53:10.800137196 +0000 UTC m=+1599.935405975" observedRunningTime="2026-01-01 08:53:11.223323628 +0000 
UTC m=+1600.358592447" watchObservedRunningTime="2026-01-01 08:53:11.229603652 +0000 UTC m=+1600.364872451" Jan 01 08:53:17 crc kubenswrapper[4867]: I0101 08:53:17.080531 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:17 crc kubenswrapper[4867]: I0101 08:53:17.081667 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:17 crc kubenswrapper[4867]: I0101 08:53:17.156983 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:17 crc kubenswrapper[4867]: I0101 08:53:17.329298 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:17 crc kubenswrapper[4867]: I0101 08:53:17.434777 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2vg28"] Jan 01 08:53:19 crc kubenswrapper[4867]: I0101 08:53:19.278608 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2vg28" podUID="60d570cf-e14b-438c-8925-6cb98f99765d" containerName="registry-server" containerID="cri-o://c615d8e025fb2ab9e15e5d995a823e046035570d9db9d5c179c8fff488bf1500" gracePeriod=2 Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.130210 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:53:20 crc kubenswrapper[4867]: E0101 08:53:20.131745 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.297680 4867 generic.go:334] "Generic (PLEG): container finished" podID="60d570cf-e14b-438c-8925-6cb98f99765d" containerID="c615d8e025fb2ab9e15e5d995a823e046035570d9db9d5c179c8fff488bf1500" exitCode=0 Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.297744 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vg28" event={"ID":"60d570cf-e14b-438c-8925-6cb98f99765d","Type":"ContainerDied","Data":"c615d8e025fb2ab9e15e5d995a823e046035570d9db9d5c179c8fff488bf1500"} Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.345003 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.537324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-catalog-content\") pod \"60d570cf-e14b-438c-8925-6cb98f99765d\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.537682 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-utilities\") pod \"60d570cf-e14b-438c-8925-6cb98f99765d\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.537705 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvnjr\" (UniqueName: \"kubernetes.io/projected/60d570cf-e14b-438c-8925-6cb98f99765d-kube-api-access-jvnjr\") pod \"60d570cf-e14b-438c-8925-6cb98f99765d\" (UID: \"60d570cf-e14b-438c-8925-6cb98f99765d\") " Jan 01 08:53:20 crc kubenswrapper[4867]: 
I0101 08:53:20.539328 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-utilities" (OuterVolumeSpecName: "utilities") pod "60d570cf-e14b-438c-8925-6cb98f99765d" (UID: "60d570cf-e14b-438c-8925-6cb98f99765d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.547226 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d570cf-e14b-438c-8925-6cb98f99765d-kube-api-access-jvnjr" (OuterVolumeSpecName: "kube-api-access-jvnjr") pod "60d570cf-e14b-438c-8925-6cb98f99765d" (UID: "60d570cf-e14b-438c-8925-6cb98f99765d"). InnerVolumeSpecName "kube-api-access-jvnjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.623614 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60d570cf-e14b-438c-8925-6cb98f99765d" (UID: "60d570cf-e14b-438c-8925-6cb98f99765d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.640192 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.640247 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d570cf-e14b-438c-8925-6cb98f99765d-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 08:53:20 crc kubenswrapper[4867]: I0101 08:53:20.640270 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvnjr\" (UniqueName: \"kubernetes.io/projected/60d570cf-e14b-438c-8925-6cb98f99765d-kube-api-access-jvnjr\") on node \"crc\" DevicePath \"\"" Jan 01 08:53:21 crc kubenswrapper[4867]: I0101 08:53:21.313743 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vg28" event={"ID":"60d570cf-e14b-438c-8925-6cb98f99765d","Type":"ContainerDied","Data":"40fa08b400ea9ca7ea7d2d87fb09907c32d5b11309f8bfe7f2c6a5cbafcbcd51"} Jan 01 08:53:21 crc kubenswrapper[4867]: I0101 08:53:21.313843 4867 scope.go:117] "RemoveContainer" containerID="c615d8e025fb2ab9e15e5d995a823e046035570d9db9d5c179c8fff488bf1500" Jan 01 08:53:21 crc kubenswrapper[4867]: I0101 08:53:21.313847 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2vg28" Jan 01 08:53:21 crc kubenswrapper[4867]: I0101 08:53:21.355225 4867 scope.go:117] "RemoveContainer" containerID="04a3bf9dc79781034a133657ac46ef10630d6dacebaa157c8bc8fa62df8929aa" Jan 01 08:53:21 crc kubenswrapper[4867]: I0101 08:53:21.356135 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2vg28"] Jan 01 08:53:21 crc kubenswrapper[4867]: I0101 08:53:21.367435 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2vg28"] Jan 01 08:53:21 crc kubenswrapper[4867]: I0101 08:53:21.393699 4867 scope.go:117] "RemoveContainer" containerID="8c53b284ee4c48bdb2f67485e7aa7204ecfec68b6d4bee8775812bcf89635ab9" Jan 01 08:53:23 crc kubenswrapper[4867]: I0101 08:53:23.150980 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d570cf-e14b-438c-8925-6cb98f99765d" path="/var/lib/kubelet/pods/60d570cf-e14b-438c-8925-6cb98f99765d/volumes" Jan 01 08:53:33 crc kubenswrapper[4867]: I0101 08:53:33.129203 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:53:33 crc kubenswrapper[4867]: E0101 08:53:33.130861 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:53:37 crc kubenswrapper[4867]: I0101 08:53:37.313496 4867 scope.go:117] "RemoveContainer" containerID="b206d939a5a3a8e3fc6d74b4154bd992040ba308ec64bd564ca2a8ed436d7ec4" Jan 01 08:53:37 crc kubenswrapper[4867]: I0101 08:53:37.372425 4867 scope.go:117] "RemoveContainer" 
containerID="937838972a1573c1df4db392f223b3e988bccb1ee572a68dba6f4b3aed9b91ee" Jan 01 08:53:37 crc kubenswrapper[4867]: I0101 08:53:37.393729 4867 scope.go:117] "RemoveContainer" containerID="3ff9506c60682e3449b67de6a85c0c9e60bafe26d40e57a74aeb41a25472b5b4" Jan 01 08:53:37 crc kubenswrapper[4867]: I0101 08:53:37.416303 4867 scope.go:117] "RemoveContainer" containerID="5e417c532e469853d56e32420e309e312face5cadadd0c1e91903b890365eb58" Jan 01 08:53:37 crc kubenswrapper[4867]: I0101 08:53:37.467436 4867 scope.go:117] "RemoveContainer" containerID="7517e3a771393dd813ce33a6e896560947fef7f4d72c0dac4bc8064fd05eda37" Jan 01 08:53:37 crc kubenswrapper[4867]: I0101 08:53:37.517373 4867 scope.go:117] "RemoveContainer" containerID="c075a54d107c469a73bb0798878da75177f69ca6ebfdff598d2c839397f64a6d" Jan 01 08:53:37 crc kubenswrapper[4867]: I0101 08:53:37.583810 4867 scope.go:117] "RemoveContainer" containerID="86c2cec082a09770270bd5194b98abf8c27b1585d93389c0c27fa861b65fdfc6" Jan 01 08:53:37 crc kubenswrapper[4867]: I0101 08:53:37.607436 4867 scope.go:117] "RemoveContainer" containerID="dee65a7cb9358ada33024723a1727a8030b5e5df85252b95d15f51b33fabff61" Jan 01 08:53:37 crc kubenswrapper[4867]: I0101 08:53:37.649796 4867 scope.go:117] "RemoveContainer" containerID="dee68f8d073a368d94e9708c1869989ddd8ada0a6eb993b2a239618bdb95a0c6" Jan 01 08:53:37 crc kubenswrapper[4867]: I0101 08:53:37.680601 4867 scope.go:117] "RemoveContainer" containerID="ad2a6a74c002016bb0dabc7c9ffad35550dacca301a44244c6a61f60a5d320af" Jan 01 08:53:48 crc kubenswrapper[4867]: I0101 08:53:48.129388 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:53:48 crc kubenswrapper[4867]: E0101 08:53:48.130601 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:54:01 crc kubenswrapper[4867]: I0101 08:54:01.136280 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:54:01 crc kubenswrapper[4867]: E0101 08:54:01.137491 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:54:13 crc kubenswrapper[4867]: I0101 08:54:13.129690 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:54:13 crc kubenswrapper[4867]: E0101 08:54:13.130711 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:54:25 crc kubenswrapper[4867]: I0101 08:54:25.128460 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:54:25 crc kubenswrapper[4867]: E0101 08:54:25.129521 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:54:37 crc kubenswrapper[4867]: I0101 08:54:37.129131 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:54:37 crc kubenswrapper[4867]: E0101 08:54:37.130235 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:54:37 crc kubenswrapper[4867]: I0101 08:54:37.889405 4867 scope.go:117] "RemoveContainer" containerID="c1af335d05f310408a3a3e7e9c132db515267848f5873efa7c468ee6eea3edc6" Jan 01 08:54:37 crc kubenswrapper[4867]: I0101 08:54:37.921147 4867 scope.go:117] "RemoveContainer" containerID="c5d0ca79ac4541ddccc69df852009bbfa95af4615fe22aebff2823b0767f3b63" Jan 01 08:54:37 crc kubenswrapper[4867]: I0101 08:54:37.956293 4867 scope.go:117] "RemoveContainer" containerID="3ca99532fece43ddcd181b8d30d852363f3e317662532c7bb4ef1999cabb7ab0" Jan 01 08:54:37 crc kubenswrapper[4867]: I0101 08:54:37.999548 4867 scope.go:117] "RemoveContainer" containerID="48c5ec08c4d1c98aa016392451b054e9e19f7bcf56b86ed4573484ac5f6be544" Jan 01 08:54:38 crc kubenswrapper[4867]: I0101 08:54:38.040234 4867 scope.go:117] "RemoveContainer" containerID="2308efd8efc29d35e443b922f20dee961e0822be16a9b0b3be84cb600b8719cd" Jan 01 08:54:38 crc kubenswrapper[4867]: I0101 08:54:38.071925 4867 scope.go:117] "RemoveContainer" containerID="95161ca79e37e61717b65705d5f72df0f5d3ee56eaca3808bc4567d29151f991" 
Jan 01 08:54:38 crc kubenswrapper[4867]: I0101 08:54:38.143871 4867 scope.go:117] "RemoveContainer" containerID="476c6421f57f7d1396c4f3c1ee490bf8a8da19804f6d3d654acc7bb8a4fe3682" Jan 01 08:54:38 crc kubenswrapper[4867]: I0101 08:54:38.172314 4867 scope.go:117] "RemoveContainer" containerID="6fd8f4c7059e184922dd9a91f3056bb550d7c290a243aea1fe9c949fb9fa29c7" Jan 01 08:54:38 crc kubenswrapper[4867]: I0101 08:54:38.214028 4867 scope.go:117] "RemoveContainer" containerID="429cd94d92c39d729bb98925ebfd0b4ac0c1dc9a973cba5d9f2daf931ca4c489" Jan 01 08:54:38 crc kubenswrapper[4867]: I0101 08:54:38.235722 4867 scope.go:117] "RemoveContainer" containerID="21e877e59228579e71714143a5ac19469d97317cbd760c47050750e19b0271c0" Jan 01 08:54:38 crc kubenswrapper[4867]: I0101 08:54:38.260053 4867 scope.go:117] "RemoveContainer" containerID="eb7dcef39a55694c9e76f1f5778b1c287c9ba1f1a1711c0d8fbaaad900a62405" Jan 01 08:54:48 crc kubenswrapper[4867]: I0101 08:54:48.128805 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:54:48 crc kubenswrapper[4867]: E0101 08:54:48.129919 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:54:59 crc kubenswrapper[4867]: I0101 08:54:59.129327 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:54:59 crc kubenswrapper[4867]: E0101 08:54:59.130557 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:55:14 crc kubenswrapper[4867]: I0101 08:55:14.129489 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:55:14 crc kubenswrapper[4867]: E0101 08:55:14.130756 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:55:28 crc kubenswrapper[4867]: I0101 08:55:28.128459 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:55:28 crc kubenswrapper[4867]: E0101 08:55:28.129541 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:55:38 crc kubenswrapper[4867]: I0101 08:55:38.460495 4867 scope.go:117] "RemoveContainer" containerID="c2cfbb056087e12b2a2885b24f64300c04f4b5224ad362d98504d48a15610138" Jan 01 08:55:38 crc kubenswrapper[4867]: I0101 08:55:38.502408 4867 scope.go:117] "RemoveContainer" containerID="f2735ab20c4cfd9df9b85bab04e44b30b480005938646b7829854374acce9273" Jan 01 08:55:38 crc kubenswrapper[4867]: 
I0101 08:55:38.521454 4867 scope.go:117] "RemoveContainer" containerID="28e20cf58885b1ba6853ea54eeca00c752b6873592dcefe99a5ea42e2ad55b82" Jan 01 08:55:38 crc kubenswrapper[4867]: I0101 08:55:38.546290 4867 scope.go:117] "RemoveContainer" containerID="2eec9c315c78c3cc244f82eab542c7f81c78a38711682898002441ea0482a959" Jan 01 08:55:38 crc kubenswrapper[4867]: I0101 08:55:38.605859 4867 scope.go:117] "RemoveContainer" containerID="59083f0140173ffc9cb86f628d124ca92657b3c9e015fa63f212aaaa828beee0" Jan 01 08:55:38 crc kubenswrapper[4867]: I0101 08:55:38.651300 4867 scope.go:117] "RemoveContainer" containerID="60e97c9fdf3987bf4c61e1d7445036f685cabc3d422531233de8d828501df5ed" Jan 01 08:55:40 crc kubenswrapper[4867]: I0101 08:55:40.130116 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:55:40 crc kubenswrapper[4867]: E0101 08:55:40.130443 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:55:51 crc kubenswrapper[4867]: I0101 08:55:51.136873 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:55:51 crc kubenswrapper[4867]: E0101 08:55:51.138242 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" 
podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:56:04 crc kubenswrapper[4867]: I0101 08:56:04.129128 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:56:04 crc kubenswrapper[4867]: E0101 08:56:04.131845 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:56:19 crc kubenswrapper[4867]: I0101 08:56:19.128785 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:56:19 crc kubenswrapper[4867]: E0101 08:56:19.129872 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:56:34 crc kubenswrapper[4867]: I0101 08:56:34.128509 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:56:34 crc kubenswrapper[4867]: E0101 08:56:34.130680 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:56:38 crc kubenswrapper[4867]: I0101 08:56:38.774325 4867 scope.go:117] "RemoveContainer" containerID="3cd677561763860feb64840f8907414cdd4cd64aae8107b33f87bbbd3b84da9d" Jan 01 08:56:38 crc kubenswrapper[4867]: I0101 08:56:38.804878 4867 scope.go:117] "RemoveContainer" containerID="d63d8143a81a71b834c747786bff3a5cc7e1868dc867d35f729e24492192e1ec" Jan 01 08:56:49 crc kubenswrapper[4867]: I0101 08:56:49.129262 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:56:49 crc kubenswrapper[4867]: E0101 08:56:49.130363 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:57:02 crc kubenswrapper[4867]: I0101 08:57:02.128944 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:57:02 crc kubenswrapper[4867]: E0101 08:57:02.130461 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:57:16 crc kubenswrapper[4867]: I0101 08:57:16.130149 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 
01 08:57:16 crc kubenswrapper[4867]: E0101 08:57:16.131463 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 08:57:28 crc kubenswrapper[4867]: I0101 08:57:28.128746 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 08:57:28 crc kubenswrapper[4867]: I0101 08:57:28.669142 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"05744890d4b5444f64f18b9e166da6da0d9404f93ba3a8da1439f955bbdb4673"} Jan 01 08:58:38 crc kubenswrapper[4867]: I0101 08:58:38.885639 4867 scope.go:117] "RemoveContainer" containerID="173cada81a47e648c890ea433801ae4da4c274fdb3437b5a478c8d41e9c79a4c" Jan 01 08:58:38 crc kubenswrapper[4867]: I0101 08:58:38.918557 4867 scope.go:117] "RemoveContainer" containerID="f40029182b2400820137305886f3cfc3530e8931da404ac36c7b75064a154757" Jan 01 08:58:38 crc kubenswrapper[4867]: I0101 08:58:38.962683 4867 scope.go:117] "RemoveContainer" containerID="912993f0b9f311bed84b04f05975e13a883f21221e8c6d19f30f3e4bcd5663f0" Jan 01 08:59:51 crc kubenswrapper[4867]: I0101 08:59:51.331184 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 08:59:51 crc kubenswrapper[4867]: I0101 08:59:51.331970 4867 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.150592 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth"] Jan 01 09:00:00 crc kubenswrapper[4867]: E0101 09:00:00.151936 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d570cf-e14b-438c-8925-6cb98f99765d" containerName="extract-utilities" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.151962 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d570cf-e14b-438c-8925-6cb98f99765d" containerName="extract-utilities" Jan 01 09:00:00 crc kubenswrapper[4867]: E0101 09:00:00.152009 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d570cf-e14b-438c-8925-6cb98f99765d" containerName="registry-server" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.152035 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d570cf-e14b-438c-8925-6cb98f99765d" containerName="registry-server" Jan 01 09:00:00 crc kubenswrapper[4867]: E0101 09:00:00.152081 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d570cf-e14b-438c-8925-6cb98f99765d" containerName="extract-content" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.152100 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d570cf-e14b-438c-8925-6cb98f99765d" containerName="extract-content" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.152352 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d570cf-e14b-438c-8925-6cb98f99765d" containerName="registry-server" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.153154 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.155756 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth"] Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.156976 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.157231 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.164122 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn99f\" (UniqueName: \"kubernetes.io/projected/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-kube-api-access-tn99f\") pod \"collect-profiles-29454300-8klth\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.164236 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-secret-volume\") pod \"collect-profiles-29454300-8klth\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.164268 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-config-volume\") pod \"collect-profiles-29454300-8klth\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.265480 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-secret-volume\") pod \"collect-profiles-29454300-8klth\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.265589 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-config-volume\") pod \"collect-profiles-29454300-8klth\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.265714 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn99f\" (UniqueName: \"kubernetes.io/projected/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-kube-api-access-tn99f\") pod \"collect-profiles-29454300-8klth\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.267288 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-config-volume\") pod \"collect-profiles-29454300-8klth\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.279473 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-secret-volume\") pod \"collect-profiles-29454300-8klth\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.284233 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn99f\" (UniqueName: \"kubernetes.io/projected/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-kube-api-access-tn99f\") pod \"collect-profiles-29454300-8klth\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.476300 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:00 crc kubenswrapper[4867]: I0101 09:00:00.913383 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth"] Jan 01 09:00:01 crc kubenswrapper[4867]: I0101 09:00:01.142450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" event={"ID":"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05","Type":"ContainerStarted","Data":"9db6757a9fbebbca1b077381f9abe995099da64e2913983df7598039ed8ba7e3"} Jan 01 09:00:01 crc kubenswrapper[4867]: I0101 09:00:01.142855 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" event={"ID":"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05","Type":"ContainerStarted","Data":"19c115323a283ece5b0ef619f040c4217c63d4faaa94013919ab27d289b46764"} Jan 01 09:00:01 crc kubenswrapper[4867]: I0101 09:00:01.178357 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" 
podStartSLOduration=1.178331742 podStartE2EDuration="1.178331742s" podCreationTimestamp="2026-01-01 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:00:01.172497588 +0000 UTC m=+2010.307766377" watchObservedRunningTime="2026-01-01 09:00:01.178331742 +0000 UTC m=+2010.313600551" Jan 01 09:00:02 crc kubenswrapper[4867]: I0101 09:00:02.153740 4867 generic.go:334] "Generic (PLEG): container finished" podID="4c0f53bc-2a77-4cc1-a6a5-adb9597eab05" containerID="9db6757a9fbebbca1b077381f9abe995099da64e2913983df7598039ed8ba7e3" exitCode=0 Jan 01 09:00:02 crc kubenswrapper[4867]: I0101 09:00:02.153831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" event={"ID":"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05","Type":"ContainerDied","Data":"9db6757a9fbebbca1b077381f9abe995099da64e2913983df7598039ed8ba7e3"} Jan 01 09:00:03 crc kubenswrapper[4867]: I0101 09:00:03.546942 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:03 crc kubenswrapper[4867]: I0101 09:00:03.716390 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn99f\" (UniqueName: \"kubernetes.io/projected/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-kube-api-access-tn99f\") pod \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " Jan 01 09:00:03 crc kubenswrapper[4867]: I0101 09:00:03.716477 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-config-volume\") pod \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " Jan 01 09:00:03 crc kubenswrapper[4867]: I0101 09:00:03.716541 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-secret-volume\") pod \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\" (UID: \"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05\") " Jan 01 09:00:03 crc kubenswrapper[4867]: I0101 09:00:03.723809 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-config-volume" (OuterVolumeSpecName: "config-volume") pod "4c0f53bc-2a77-4cc1-a6a5-adb9597eab05" (UID: "4c0f53bc-2a77-4cc1-a6a5-adb9597eab05"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:00:03 crc kubenswrapper[4867]: I0101 09:00:03.729123 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-kube-api-access-tn99f" (OuterVolumeSpecName: "kube-api-access-tn99f") pod "4c0f53bc-2a77-4cc1-a6a5-adb9597eab05" (UID: "4c0f53bc-2a77-4cc1-a6a5-adb9597eab05"). 
InnerVolumeSpecName "kube-api-access-tn99f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:00:03 crc kubenswrapper[4867]: I0101 09:00:03.739336 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4c0f53bc-2a77-4cc1-a6a5-adb9597eab05" (UID: "4c0f53bc-2a77-4cc1-a6a5-adb9597eab05"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:00:03 crc kubenswrapper[4867]: I0101 09:00:03.818851 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn99f\" (UniqueName: \"kubernetes.io/projected/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-kube-api-access-tn99f\") on node \"crc\" DevicePath \"\"" Jan 01 09:00:03 crc kubenswrapper[4867]: I0101 09:00:03.818928 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-config-volume\") on node \"crc\" DevicePath \"\"" Jan 01 09:00:03 crc kubenswrapper[4867]: I0101 09:00:03.818951 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 01 09:00:04 crc kubenswrapper[4867]: I0101 09:00:04.175624 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" event={"ID":"4c0f53bc-2a77-4cc1-a6a5-adb9597eab05","Type":"ContainerDied","Data":"19c115323a283ece5b0ef619f040c4217c63d4faaa94013919ab27d289b46764"} Jan 01 09:00:04 crc kubenswrapper[4867]: I0101 09:00:04.175712 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19c115323a283ece5b0ef619f040c4217c63d4faaa94013919ab27d289b46764" Jan 01 09:00:04 crc kubenswrapper[4867]: I0101 09:00:04.175730 4867 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth" Jan 01 09:00:04 crc kubenswrapper[4867]: I0101 09:00:04.263552 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs"] Jan 01 09:00:04 crc kubenswrapper[4867]: I0101 09:00:04.274215 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454255-49zxs"] Jan 01 09:00:05 crc kubenswrapper[4867]: I0101 09:00:05.144508 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed3ea167-3dde-4d3d-b36b-277e5368f1c9" path="/var/lib/kubelet/pods/ed3ea167-3dde-4d3d-b36b-277e5368f1c9/volumes" Jan 01 09:00:21 crc kubenswrapper[4867]: I0101 09:00:21.331497 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:00:21 crc kubenswrapper[4867]: I0101 09:00:21.333137 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.179743 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ssp2c"] Jan 01 09:00:31 crc kubenswrapper[4867]: E0101 09:00:31.180756 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0f53bc-2a77-4cc1-a6a5-adb9597eab05" containerName="collect-profiles" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.180778 4867 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4c0f53bc-2a77-4cc1-a6a5-adb9597eab05" containerName="collect-profiles" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.181104 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0f53bc-2a77-4cc1-a6a5-adb9597eab05" containerName="collect-profiles" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.182833 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.187708 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ssp2c"] Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.302951 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-catalog-content\") pod \"redhat-operators-ssp2c\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.303233 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v66ms\" (UniqueName: \"kubernetes.io/projected/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-kube-api-access-v66ms\") pod \"redhat-operators-ssp2c\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.303287 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-utilities\") pod \"redhat-operators-ssp2c\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.404077 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-v66ms\" (UniqueName: \"kubernetes.io/projected/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-kube-api-access-v66ms\") pod \"redhat-operators-ssp2c\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.404728 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-utilities\") pod \"redhat-operators-ssp2c\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.404902 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-catalog-content\") pod \"redhat-operators-ssp2c\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.405269 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-utilities\") pod \"redhat-operators-ssp2c\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.405310 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-catalog-content\") pod \"redhat-operators-ssp2c\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.439875 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v66ms\" 
(UniqueName: \"kubernetes.io/projected/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-kube-api-access-v66ms\") pod \"redhat-operators-ssp2c\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.509131 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:31 crc kubenswrapper[4867]: W0101 09:00:31.763434 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce6a8810_3e43_4b91_9c8e_c2e5c206fae2.slice/crio-f5f57f5be23ad8d62ba9b44ae78c0d8f79703257afb710583026338c5dd405f2 WatchSource:0}: Error finding container f5f57f5be23ad8d62ba9b44ae78c0d8f79703257afb710583026338c5dd405f2: Status 404 returned error can't find the container with id f5f57f5be23ad8d62ba9b44ae78c0d8f79703257afb710583026338c5dd405f2 Jan 01 09:00:31 crc kubenswrapper[4867]: I0101 09:00:31.764783 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ssp2c"] Jan 01 09:00:32 crc kubenswrapper[4867]: I0101 09:00:32.416622 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerID="24026077df665e11b15d9186cee491ac4643f1aa39747c29a5d2a3543c5ce7d3" exitCode=0 Jan 01 09:00:32 crc kubenswrapper[4867]: I0101 09:00:32.416806 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ssp2c" event={"ID":"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2","Type":"ContainerDied","Data":"24026077df665e11b15d9186cee491ac4643f1aa39747c29a5d2a3543c5ce7d3"} Jan 01 09:00:32 crc kubenswrapper[4867]: I0101 09:00:32.417846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ssp2c" 
event={"ID":"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2","Type":"ContainerStarted","Data":"f5f57f5be23ad8d62ba9b44ae78c0d8f79703257afb710583026338c5dd405f2"} Jan 01 09:00:32 crc kubenswrapper[4867]: I0101 09:00:32.418445 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 09:00:34 crc kubenswrapper[4867]: I0101 09:00:34.438732 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerID="a2a2aee66e4f4f7da1e65e08bd66f30554f2cd72386e9c1f6d2bbd860574ff15" exitCode=0 Jan 01 09:00:34 crc kubenswrapper[4867]: I0101 09:00:34.438846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ssp2c" event={"ID":"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2","Type":"ContainerDied","Data":"a2a2aee66e4f4f7da1e65e08bd66f30554f2cd72386e9c1f6d2bbd860574ff15"} Jan 01 09:00:35 crc kubenswrapper[4867]: I0101 09:00:35.455775 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ssp2c" event={"ID":"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2","Type":"ContainerStarted","Data":"8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a"} Jan 01 09:00:35 crc kubenswrapper[4867]: I0101 09:00:35.505167 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ssp2c" podStartSLOduration=2.049746519 podStartE2EDuration="4.505141581s" podCreationTimestamp="2026-01-01 09:00:31 +0000 UTC" firstStartedPulling="2026-01-01 09:00:32.418189657 +0000 UTC m=+2041.553458426" lastFinishedPulling="2026-01-01 09:00:34.873584679 +0000 UTC m=+2044.008853488" observedRunningTime="2026-01-01 09:00:35.492251328 +0000 UTC m=+2044.627520137" watchObservedRunningTime="2026-01-01 09:00:35.505141581 +0000 UTC m=+2044.640410390" Jan 01 09:00:39 crc kubenswrapper[4867]: I0101 09:00:39.035419 4867 scope.go:117] "RemoveContainer" 
containerID="aef129ecc0ac3ce02b207ca25600b4c322c872e8c9af9326a544a7369c6dab45" Jan 01 09:00:41 crc kubenswrapper[4867]: I0101 09:00:41.509727 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:41 crc kubenswrapper[4867]: I0101 09:00:41.510157 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:42 crc kubenswrapper[4867]: I0101 09:00:42.576049 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ssp2c" podUID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerName="registry-server" probeResult="failure" output=< Jan 01 09:00:42 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 01 09:00:42 crc kubenswrapper[4867]: > Jan 01 09:00:51 crc kubenswrapper[4867]: I0101 09:00:51.331329 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:00:51 crc kubenswrapper[4867]: I0101 09:00:51.331854 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:00:51 crc kubenswrapper[4867]: I0101 09:00:51.331942 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 09:00:51 crc kubenswrapper[4867]: I0101 09:00:51.332649 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"05744890d4b5444f64f18b9e166da6da0d9404f93ba3a8da1439f955bbdb4673"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 09:00:51 crc kubenswrapper[4867]: I0101 09:00:51.332714 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://05744890d4b5444f64f18b9e166da6da0d9404f93ba3a8da1439f955bbdb4673" gracePeriod=600 Jan 01 09:00:51 crc kubenswrapper[4867]: I0101 09:00:51.562756 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:51 crc kubenswrapper[4867]: I0101 09:00:51.613861 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:51 crc kubenswrapper[4867]: I0101 09:00:51.808805 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ssp2c"] Jan 01 09:00:52 crc kubenswrapper[4867]: I0101 09:00:52.594317 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="05744890d4b5444f64f18b9e166da6da0d9404f93ba3a8da1439f955bbdb4673" exitCode=0 Jan 01 09:00:52 crc kubenswrapper[4867]: I0101 09:00:52.594449 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"05744890d4b5444f64f18b9e166da6da0d9404f93ba3a8da1439f955bbdb4673"} Jan 01 09:00:52 crc kubenswrapper[4867]: I0101 09:00:52.594798 4867 scope.go:117] "RemoveContainer" containerID="a427096bdc8d52c036c3aaecb3f5ce96e01b2c654772d5d0fcf3342a5e745a56" Jan 01 09:00:52 crc 
kubenswrapper[4867]: I0101 09:00:52.595137 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ssp2c" podUID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerName="registry-server" containerID="cri-o://8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a" gracePeriod=2 Jan 01 09:00:52 crc kubenswrapper[4867]: I0101 09:00:52.981511 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.073517 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-utilities\") pod \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.073598 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-catalog-content\") pod \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.073710 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v66ms\" (UniqueName: \"kubernetes.io/projected/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-kube-api-access-v66ms\") pod \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\" (UID: \"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2\") " Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.074450 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-utilities" (OuterVolumeSpecName: "utilities") pod "ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" (UID: "ce6a8810-3e43-4b91-9c8e-c2e5c206fae2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.079016 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-kube-api-access-v66ms" (OuterVolumeSpecName: "kube-api-access-v66ms") pod "ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" (UID: "ce6a8810-3e43-4b91-9c8e-c2e5c206fae2"). InnerVolumeSpecName "kube-api-access-v66ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.174791 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v66ms\" (UniqueName: \"kubernetes.io/projected/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-kube-api-access-v66ms\") on node \"crc\" DevicePath \"\"" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.174824 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.241931 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" (UID: "ce6a8810-3e43-4b91-9c8e-c2e5c206fae2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.275722 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.608713 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d"} Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.614530 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerID="8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a" exitCode=0 Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.614595 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ssp2c" event={"ID":"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2","Type":"ContainerDied","Data":"8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a"} Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.614660 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ssp2c" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.614691 4867 scope.go:117] "RemoveContainer" containerID="8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.614665 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ssp2c" event={"ID":"ce6a8810-3e43-4b91-9c8e-c2e5c206fae2","Type":"ContainerDied","Data":"f5f57f5be23ad8d62ba9b44ae78c0d8f79703257afb710583026338c5dd405f2"} Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.657299 4867 scope.go:117] "RemoveContainer" containerID="a2a2aee66e4f4f7da1e65e08bd66f30554f2cd72386e9c1f6d2bbd860574ff15" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.680376 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ssp2c"] Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.690252 4867 scope.go:117] "RemoveContainer" containerID="24026077df665e11b15d9186cee491ac4643f1aa39747c29a5d2a3543c5ce7d3" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.691503 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ssp2c"] Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.727623 4867 scope.go:117] "RemoveContainer" containerID="8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a" Jan 01 09:00:53 crc kubenswrapper[4867]: E0101 09:00:53.728253 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a\": container with ID starting with 8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a not found: ID does not exist" containerID="8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.728310 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a"} err="failed to get container status \"8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a\": rpc error: code = NotFound desc = could not find container \"8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a\": container with ID starting with 8c4439e640457e0d4c14055f2d6e8363b86b38540dd99e9f25e4b4a5d9c1823a not found: ID does not exist" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.728345 4867 scope.go:117] "RemoveContainer" containerID="a2a2aee66e4f4f7da1e65e08bd66f30554f2cd72386e9c1f6d2bbd860574ff15" Jan 01 09:00:53 crc kubenswrapper[4867]: E0101 09:00:53.729075 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2a2aee66e4f4f7da1e65e08bd66f30554f2cd72386e9c1f6d2bbd860574ff15\": container with ID starting with a2a2aee66e4f4f7da1e65e08bd66f30554f2cd72386e9c1f6d2bbd860574ff15 not found: ID does not exist" containerID="a2a2aee66e4f4f7da1e65e08bd66f30554f2cd72386e9c1f6d2bbd860574ff15" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.729120 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2a2aee66e4f4f7da1e65e08bd66f30554f2cd72386e9c1f6d2bbd860574ff15"} err="failed to get container status \"a2a2aee66e4f4f7da1e65e08bd66f30554f2cd72386e9c1f6d2bbd860574ff15\": rpc error: code = NotFound desc = could not find container \"a2a2aee66e4f4f7da1e65e08bd66f30554f2cd72386e9c1f6d2bbd860574ff15\": container with ID starting with a2a2aee66e4f4f7da1e65e08bd66f30554f2cd72386e9c1f6d2bbd860574ff15 not found: ID does not exist" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.729145 4867 scope.go:117] "RemoveContainer" containerID="24026077df665e11b15d9186cee491ac4643f1aa39747c29a5d2a3543c5ce7d3" Jan 01 09:00:53 crc kubenswrapper[4867]: E0101 
09:00:53.729601 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24026077df665e11b15d9186cee491ac4643f1aa39747c29a5d2a3543c5ce7d3\": container with ID starting with 24026077df665e11b15d9186cee491ac4643f1aa39747c29a5d2a3543c5ce7d3 not found: ID does not exist" containerID="24026077df665e11b15d9186cee491ac4643f1aa39747c29a5d2a3543c5ce7d3" Jan 01 09:00:53 crc kubenswrapper[4867]: I0101 09:00:53.729639 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24026077df665e11b15d9186cee491ac4643f1aa39747c29a5d2a3543c5ce7d3"} err="failed to get container status \"24026077df665e11b15d9186cee491ac4643f1aa39747c29a5d2a3543c5ce7d3\": rpc error: code = NotFound desc = could not find container \"24026077df665e11b15d9186cee491ac4643f1aa39747c29a5d2a3543c5ce7d3\": container with ID starting with 24026077df665e11b15d9186cee491ac4643f1aa39747c29a5d2a3543c5ce7d3 not found: ID does not exist" Jan 01 09:00:55 crc kubenswrapper[4867]: I0101 09:00:55.141942 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" path="/var/lib/kubelet/pods/ce6a8810-3e43-4b91-9c8e-c2e5c206fae2/volumes" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.313080 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nwn5j"] Jan 01 09:02:32 crc kubenswrapper[4867]: E0101 09:02:32.314291 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerName="extract-content" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.314315 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerName="extract-content" Jan 01 09:02:32 crc kubenswrapper[4867]: E0101 09:02:32.314343 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerName="extract-utilities" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.314356 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerName="extract-utilities" Jan 01 09:02:32 crc kubenswrapper[4867]: E0101 09:02:32.314399 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerName="registry-server" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.314411 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerName="registry-server" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.314635 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6a8810-3e43-4b91-9c8e-c2e5c206fae2" containerName="registry-server" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.316572 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.345049 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nwn5j"] Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.433291 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfjq6\" (UniqueName: \"kubernetes.io/projected/6eda25f9-77e0-48e5-bcf0-007254ff593e-kube-api-access-jfjq6\") pod \"certified-operators-nwn5j\" (UID: \"6eda25f9-77e0-48e5-bcf0-007254ff593e\") " pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.433467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eda25f9-77e0-48e5-bcf0-007254ff593e-utilities\") pod \"certified-operators-nwn5j\" (UID: 
\"6eda25f9-77e0-48e5-bcf0-007254ff593e\") " pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.433563 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eda25f9-77e0-48e5-bcf0-007254ff593e-catalog-content\") pod \"certified-operators-nwn5j\" (UID: \"6eda25f9-77e0-48e5-bcf0-007254ff593e\") " pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.535611 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eda25f9-77e0-48e5-bcf0-007254ff593e-catalog-content\") pod \"certified-operators-nwn5j\" (UID: \"6eda25f9-77e0-48e5-bcf0-007254ff593e\") " pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.535713 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfjq6\" (UniqueName: \"kubernetes.io/projected/6eda25f9-77e0-48e5-bcf0-007254ff593e-kube-api-access-jfjq6\") pod \"certified-operators-nwn5j\" (UID: \"6eda25f9-77e0-48e5-bcf0-007254ff593e\") " pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.535812 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eda25f9-77e0-48e5-bcf0-007254ff593e-utilities\") pod \"certified-operators-nwn5j\" (UID: \"6eda25f9-77e0-48e5-bcf0-007254ff593e\") " pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.536409 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eda25f9-77e0-48e5-bcf0-007254ff593e-catalog-content\") pod \"certified-operators-nwn5j\" (UID: 
\"6eda25f9-77e0-48e5-bcf0-007254ff593e\") " pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.536809 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eda25f9-77e0-48e5-bcf0-007254ff593e-utilities\") pod \"certified-operators-nwn5j\" (UID: \"6eda25f9-77e0-48e5-bcf0-007254ff593e\") " pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.563111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfjq6\" (UniqueName: \"kubernetes.io/projected/6eda25f9-77e0-48e5-bcf0-007254ff593e-kube-api-access-jfjq6\") pod \"certified-operators-nwn5j\" (UID: \"6eda25f9-77e0-48e5-bcf0-007254ff593e\") " pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:32 crc kubenswrapper[4867]: I0101 09:02:32.686348 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:33 crc kubenswrapper[4867]: I0101 09:02:33.181030 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nwn5j"] Jan 01 09:02:33 crc kubenswrapper[4867]: I0101 09:02:33.503724 4867 generic.go:334] "Generic (PLEG): container finished" podID="6eda25f9-77e0-48e5-bcf0-007254ff593e" containerID="8c24024a0f9f7902871fbb6ab89e03b97800f881dce14a4e7ceae946bbd529aa" exitCode=0 Jan 01 09:02:33 crc kubenswrapper[4867]: I0101 09:02:33.503787 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwn5j" event={"ID":"6eda25f9-77e0-48e5-bcf0-007254ff593e","Type":"ContainerDied","Data":"8c24024a0f9f7902871fbb6ab89e03b97800f881dce14a4e7ceae946bbd529aa"} Jan 01 09:02:33 crc kubenswrapper[4867]: I0101 09:02:33.503829 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwn5j" 
event={"ID":"6eda25f9-77e0-48e5-bcf0-007254ff593e","Type":"ContainerStarted","Data":"68a17eb4c0847291bdaa37aee7c70f2c910cf04b3d81f5b39235314086d5799d"} Jan 01 09:02:34 crc kubenswrapper[4867]: I0101 09:02:34.692732 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fldbw"] Jan 01 09:02:34 crc kubenswrapper[4867]: I0101 09:02:34.694399 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:34 crc kubenswrapper[4867]: I0101 09:02:34.700638 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fldbw"] Jan 01 09:02:34 crc kubenswrapper[4867]: I0101 09:02:34.871250 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-utilities\") pod \"redhat-marketplace-fldbw\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:34 crc kubenswrapper[4867]: I0101 09:02:34.871541 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-catalog-content\") pod \"redhat-marketplace-fldbw\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:34 crc kubenswrapper[4867]: I0101 09:02:34.871683 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp9s8\" (UniqueName: \"kubernetes.io/projected/49bb3516-f7e9-4417-a18f-eb1690123f63-kube-api-access-mp9s8\") pod \"redhat-marketplace-fldbw\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:34 crc kubenswrapper[4867]: I0101 09:02:34.972625 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-catalog-content\") pod \"redhat-marketplace-fldbw\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:34 crc kubenswrapper[4867]: I0101 09:02:34.972942 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp9s8\" (UniqueName: \"kubernetes.io/projected/49bb3516-f7e9-4417-a18f-eb1690123f63-kube-api-access-mp9s8\") pod \"redhat-marketplace-fldbw\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:34 crc kubenswrapper[4867]: I0101 09:02:34.973046 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-utilities\") pod \"redhat-marketplace-fldbw\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:34 crc kubenswrapper[4867]: I0101 09:02:34.973416 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-catalog-content\") pod \"redhat-marketplace-fldbw\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:34 crc kubenswrapper[4867]: I0101 09:02:34.973489 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-utilities\") pod \"redhat-marketplace-fldbw\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:35 crc kubenswrapper[4867]: I0101 09:02:35.008865 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mp9s8\" (UniqueName: \"kubernetes.io/projected/49bb3516-f7e9-4417-a18f-eb1690123f63-kube-api-access-mp9s8\") pod \"redhat-marketplace-fldbw\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:35 crc kubenswrapper[4867]: I0101 09:02:35.308305 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:35 crc kubenswrapper[4867]: I0101 09:02:35.739073 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fldbw"] Jan 01 09:02:35 crc kubenswrapper[4867]: W0101 09:02:35.752088 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49bb3516_f7e9_4417_a18f_eb1690123f63.slice/crio-b8fde754c3e6465e7d26a653a44cc8a7933128b9f5de553d90defa884dfd0829 WatchSource:0}: Error finding container b8fde754c3e6465e7d26a653a44cc8a7933128b9f5de553d90defa884dfd0829: Status 404 returned error can't find the container with id b8fde754c3e6465e7d26a653a44cc8a7933128b9f5de553d90defa884dfd0829 Jan 01 09:02:36 crc kubenswrapper[4867]: I0101 09:02:36.525863 4867 generic.go:334] "Generic (PLEG): container finished" podID="49bb3516-f7e9-4417-a18f-eb1690123f63" containerID="e06b42f99a8cb4ddd21c8edc19e2ccc1c74c47ead32479ec032c803f43fb687a" exitCode=0 Jan 01 09:02:36 crc kubenswrapper[4867]: I0101 09:02:36.525965 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fldbw" event={"ID":"49bb3516-f7e9-4417-a18f-eb1690123f63","Type":"ContainerDied","Data":"e06b42f99a8cb4ddd21c8edc19e2ccc1c74c47ead32479ec032c803f43fb687a"} Jan 01 09:02:36 crc kubenswrapper[4867]: I0101 09:02:36.526270 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fldbw" 
event={"ID":"49bb3516-f7e9-4417-a18f-eb1690123f63","Type":"ContainerStarted","Data":"b8fde754c3e6465e7d26a653a44cc8a7933128b9f5de553d90defa884dfd0829"} Jan 01 09:02:38 crc kubenswrapper[4867]: I0101 09:02:38.547557 4867 generic.go:334] "Generic (PLEG): container finished" podID="6eda25f9-77e0-48e5-bcf0-007254ff593e" containerID="d178a62e70057a84ddf17c8191e7aa9b70532a4f15271932e1938f25f333252e" exitCode=0 Jan 01 09:02:38 crc kubenswrapper[4867]: I0101 09:02:38.547621 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwn5j" event={"ID":"6eda25f9-77e0-48e5-bcf0-007254ff593e","Type":"ContainerDied","Data":"d178a62e70057a84ddf17c8191e7aa9b70532a4f15271932e1938f25f333252e"} Jan 01 09:02:38 crc kubenswrapper[4867]: I0101 09:02:38.549949 4867 generic.go:334] "Generic (PLEG): container finished" podID="49bb3516-f7e9-4417-a18f-eb1690123f63" containerID="86558d1038d88503c4d22513269789249070d13a176bb3796a6099e4bbd62cd7" exitCode=0 Jan 01 09:02:38 crc kubenswrapper[4867]: I0101 09:02:38.549979 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fldbw" event={"ID":"49bb3516-f7e9-4417-a18f-eb1690123f63","Type":"ContainerDied","Data":"86558d1038d88503c4d22513269789249070d13a176bb3796a6099e4bbd62cd7"} Jan 01 09:02:39 crc kubenswrapper[4867]: I0101 09:02:39.563208 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwn5j" event={"ID":"6eda25f9-77e0-48e5-bcf0-007254ff593e","Type":"ContainerStarted","Data":"edad37c48d580f8c85824604243b18aa7eb2bf004793119145b42fd92c5075be"} Jan 01 09:02:39 crc kubenswrapper[4867]: I0101 09:02:39.567398 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fldbw" event={"ID":"49bb3516-f7e9-4417-a18f-eb1690123f63","Type":"ContainerStarted","Data":"5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e"} Jan 01 09:02:39 crc kubenswrapper[4867]: I0101 
09:02:39.588072 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nwn5j" podStartSLOduration=2.085329411 podStartE2EDuration="7.588050716s" podCreationTimestamp="2026-01-01 09:02:32 +0000 UTC" firstStartedPulling="2026-01-01 09:02:33.505431079 +0000 UTC m=+2162.640699848" lastFinishedPulling="2026-01-01 09:02:39.008152354 +0000 UTC m=+2168.143421153" observedRunningTime="2026-01-01 09:02:39.587770188 +0000 UTC m=+2168.723038997" watchObservedRunningTime="2026-01-01 09:02:39.588050716 +0000 UTC m=+2168.723319495" Jan 01 09:02:39 crc kubenswrapper[4867]: I0101 09:02:39.616998 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fldbw" podStartSLOduration=4.041751861 podStartE2EDuration="5.61696741s" podCreationTimestamp="2026-01-01 09:02:34 +0000 UTC" firstStartedPulling="2026-01-01 09:02:37.448649683 +0000 UTC m=+2166.583918452" lastFinishedPulling="2026-01-01 09:02:39.023865202 +0000 UTC m=+2168.159134001" observedRunningTime="2026-01-01 09:02:39.609448406 +0000 UTC m=+2168.744717205" watchObservedRunningTime="2026-01-01 09:02:39.61696741 +0000 UTC m=+2168.752236189" Jan 01 09:02:42 crc kubenswrapper[4867]: I0101 09:02:42.687418 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:42 crc kubenswrapper[4867]: I0101 09:02:42.689072 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:42 crc kubenswrapper[4867]: I0101 09:02:42.752160 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:44 crc kubenswrapper[4867]: I0101 09:02:44.685487 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nwn5j" Jan 01 09:02:44 
crc kubenswrapper[4867]: I0101 09:02:44.802125 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nwn5j"] Jan 01 09:02:44 crc kubenswrapper[4867]: I0101 09:02:44.853710 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dtc7k"] Jan 01 09:02:44 crc kubenswrapper[4867]: I0101 09:02:44.854151 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dtc7k" podUID="727cf46e-4a29-4a4a-9c90-6052bc53068c" containerName="registry-server" containerID="cri-o://05d75b2897a341874112a6c7d878c3a812b8a981fdeddda6db7756f674bdc982" gracePeriod=2 Jan 01 09:02:45 crc kubenswrapper[4867]: I0101 09:02:45.309481 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:45 crc kubenswrapper[4867]: I0101 09:02:45.309536 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:45 crc kubenswrapper[4867]: I0101 09:02:45.373720 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:45 crc kubenswrapper[4867]: I0101 09:02:45.618472 4867 generic.go:334] "Generic (PLEG): container finished" podID="727cf46e-4a29-4a4a-9c90-6052bc53068c" containerID="05d75b2897a341874112a6c7d878c3a812b8a981fdeddda6db7756f674bdc982" exitCode=0 Jan 01 09:02:45 crc kubenswrapper[4867]: I0101 09:02:45.618547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtc7k" event={"ID":"727cf46e-4a29-4a4a-9c90-6052bc53068c","Type":"ContainerDied","Data":"05d75b2897a341874112a6c7d878c3a812b8a981fdeddda6db7756f674bdc982"} Jan 01 09:02:45 crc kubenswrapper[4867]: I0101 09:02:45.666980 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.373136 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.571271 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lxrk\" (UniqueName: \"kubernetes.io/projected/727cf46e-4a29-4a4a-9c90-6052bc53068c-kube-api-access-2lxrk\") pod \"727cf46e-4a29-4a4a-9c90-6052bc53068c\" (UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.571366 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-utilities\") pod \"727cf46e-4a29-4a4a-9c90-6052bc53068c\" (UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.571399 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-catalog-content\") pod \"727cf46e-4a29-4a4a-9c90-6052bc53068c\" (UID: \"727cf46e-4a29-4a4a-9c90-6052bc53068c\") " Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.572071 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-utilities" (OuterVolumeSpecName: "utilities") pod "727cf46e-4a29-4a4a-9c90-6052bc53068c" (UID: "727cf46e-4a29-4a4a-9c90-6052bc53068c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.587210 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727cf46e-4a29-4a4a-9c90-6052bc53068c-kube-api-access-2lxrk" (OuterVolumeSpecName: "kube-api-access-2lxrk") pod "727cf46e-4a29-4a4a-9c90-6052bc53068c" (UID: "727cf46e-4a29-4a4a-9c90-6052bc53068c"). InnerVolumeSpecName "kube-api-access-2lxrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.652633 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtc7k" event={"ID":"727cf46e-4a29-4a4a-9c90-6052bc53068c","Type":"ContainerDied","Data":"dd54f5e1466b5a876252b660d7ed1e9f9a6b09db4b21d352e8414f42f2307adf"} Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.652719 4867 scope.go:117] "RemoveContainer" containerID="05d75b2897a341874112a6c7d878c3a812b8a981fdeddda6db7756f674bdc982" Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.653021 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dtc7k" Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.672691 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lxrk\" (UniqueName: \"kubernetes.io/projected/727cf46e-4a29-4a4a-9c90-6052bc53068c-kube-api-access-2lxrk\") on node \"crc\" DevicePath \"\"" Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.672730 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.680389 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "727cf46e-4a29-4a4a-9c90-6052bc53068c" (UID: "727cf46e-4a29-4a4a-9c90-6052bc53068c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.689339 4867 scope.go:117] "RemoveContainer" containerID="b9994592459a1137cd454939a0f8244d95ab83c16dd5268b2c21ccc13d184892" Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.714140 4867 scope.go:117] "RemoveContainer" containerID="255c90736e993e44457c67619f931b858e44c0c9c8d0fca3c6895087785dd67d" Jan 01 09:02:46 crc kubenswrapper[4867]: I0101 09:02:46.773934 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727cf46e-4a29-4a4a-9c90-6052bc53068c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:02:47 crc kubenswrapper[4867]: I0101 09:02:47.009034 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dtc7k"] Jan 01 09:02:47 crc kubenswrapper[4867]: I0101 09:02:47.020452 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dtc7k"] Jan 01 09:02:47 crc kubenswrapper[4867]: I0101 09:02:47.145692 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="727cf46e-4a29-4a4a-9c90-6052bc53068c" path="/var/lib/kubelet/pods/727cf46e-4a29-4a4a-9c90-6052bc53068c/volumes" Jan 01 09:02:47 crc kubenswrapper[4867]: I0101 09:02:47.695146 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fldbw"] Jan 01 09:02:47 crc kubenswrapper[4867]: I0101 09:02:47.695784 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fldbw" podUID="49bb3516-f7e9-4417-a18f-eb1690123f63" containerName="registry-server" containerID="cri-o://5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e" gracePeriod=2 Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.104192 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.296517 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp9s8\" (UniqueName: \"kubernetes.io/projected/49bb3516-f7e9-4417-a18f-eb1690123f63-kube-api-access-mp9s8\") pod \"49bb3516-f7e9-4417-a18f-eb1690123f63\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.296614 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-catalog-content\") pod \"49bb3516-f7e9-4417-a18f-eb1690123f63\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.296668 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-utilities\") pod \"49bb3516-f7e9-4417-a18f-eb1690123f63\" (UID: \"49bb3516-f7e9-4417-a18f-eb1690123f63\") " Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.298136 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-utilities" (OuterVolumeSpecName: "utilities") pod "49bb3516-f7e9-4417-a18f-eb1690123f63" (UID: "49bb3516-f7e9-4417-a18f-eb1690123f63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.327133 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49bb3516-f7e9-4417-a18f-eb1690123f63-kube-api-access-mp9s8" (OuterVolumeSpecName: "kube-api-access-mp9s8") pod "49bb3516-f7e9-4417-a18f-eb1690123f63" (UID: "49bb3516-f7e9-4417-a18f-eb1690123f63"). InnerVolumeSpecName "kube-api-access-mp9s8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.337281 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49bb3516-f7e9-4417-a18f-eb1690123f63" (UID: "49bb3516-f7e9-4417-a18f-eb1690123f63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.398401 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.398451 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49bb3516-f7e9-4417-a18f-eb1690123f63-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.398465 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp9s8\" (UniqueName: \"kubernetes.io/projected/49bb3516-f7e9-4417-a18f-eb1690123f63-kube-api-access-mp9s8\") on node \"crc\" DevicePath \"\"" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.676232 4867 generic.go:334] "Generic (PLEG): container finished" podID="49bb3516-f7e9-4417-a18f-eb1690123f63" containerID="5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e" exitCode=0 Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.676285 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fldbw" event={"ID":"49bb3516-f7e9-4417-a18f-eb1690123f63","Type":"ContainerDied","Data":"5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e"} Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.676335 4867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-fldbw" event={"ID":"49bb3516-f7e9-4417-a18f-eb1690123f63","Type":"ContainerDied","Data":"b8fde754c3e6465e7d26a653a44cc8a7933128b9f5de553d90defa884dfd0829"} Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.676361 4867 scope.go:117] "RemoveContainer" containerID="5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.676364 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fldbw" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.701125 4867 scope.go:117] "RemoveContainer" containerID="86558d1038d88503c4d22513269789249070d13a176bb3796a6099e4bbd62cd7" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.722151 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fldbw"] Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.726781 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fldbw"] Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.757429 4867 scope.go:117] "RemoveContainer" containerID="e06b42f99a8cb4ddd21c8edc19e2ccc1c74c47ead32479ec032c803f43fb687a" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.780915 4867 scope.go:117] "RemoveContainer" containerID="5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e" Jan 01 09:02:48 crc kubenswrapper[4867]: E0101 09:02:48.781490 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e\": container with ID starting with 5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e not found: ID does not exist" containerID="5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.781528 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e"} err="failed to get container status \"5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e\": rpc error: code = NotFound desc = could not find container \"5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e\": container with ID starting with 5808abe1c5f42d479c242986e2aeba033d196ecc140837192546ec9006c7ef4e not found: ID does not exist" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.781557 4867 scope.go:117] "RemoveContainer" containerID="86558d1038d88503c4d22513269789249070d13a176bb3796a6099e4bbd62cd7" Jan 01 09:02:48 crc kubenswrapper[4867]: E0101 09:02:48.782042 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86558d1038d88503c4d22513269789249070d13a176bb3796a6099e4bbd62cd7\": container with ID starting with 86558d1038d88503c4d22513269789249070d13a176bb3796a6099e4bbd62cd7 not found: ID does not exist" containerID="86558d1038d88503c4d22513269789249070d13a176bb3796a6099e4bbd62cd7" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.782091 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86558d1038d88503c4d22513269789249070d13a176bb3796a6099e4bbd62cd7"} err="failed to get container status \"86558d1038d88503c4d22513269789249070d13a176bb3796a6099e4bbd62cd7\": rpc error: code = NotFound desc = could not find container \"86558d1038d88503c4d22513269789249070d13a176bb3796a6099e4bbd62cd7\": container with ID starting with 86558d1038d88503c4d22513269789249070d13a176bb3796a6099e4bbd62cd7 not found: ID does not exist" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.782123 4867 scope.go:117] "RemoveContainer" containerID="e06b42f99a8cb4ddd21c8edc19e2ccc1c74c47ead32479ec032c803f43fb687a" Jan 01 09:02:48 crc kubenswrapper[4867]: E0101 
09:02:48.782558 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e06b42f99a8cb4ddd21c8edc19e2ccc1c74c47ead32479ec032c803f43fb687a\": container with ID starting with e06b42f99a8cb4ddd21c8edc19e2ccc1c74c47ead32479ec032c803f43fb687a not found: ID does not exist" containerID="e06b42f99a8cb4ddd21c8edc19e2ccc1c74c47ead32479ec032c803f43fb687a" Jan 01 09:02:48 crc kubenswrapper[4867]: I0101 09:02:48.782626 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e06b42f99a8cb4ddd21c8edc19e2ccc1c74c47ead32479ec032c803f43fb687a"} err="failed to get container status \"e06b42f99a8cb4ddd21c8edc19e2ccc1c74c47ead32479ec032c803f43fb687a\": rpc error: code = NotFound desc = could not find container \"e06b42f99a8cb4ddd21c8edc19e2ccc1c74c47ead32479ec032c803f43fb687a\": container with ID starting with e06b42f99a8cb4ddd21c8edc19e2ccc1c74c47ead32479ec032c803f43fb687a not found: ID does not exist" Jan 01 09:02:49 crc kubenswrapper[4867]: I0101 09:02:49.146732 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49bb3516-f7e9-4417-a18f-eb1690123f63" path="/var/lib/kubelet/pods/49bb3516-f7e9-4417-a18f-eb1690123f63/volumes" Jan 01 09:03:18 crc kubenswrapper[4867]: I0101 09:03:18.931727 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tc4ww"] Jan 01 09:03:18 crc kubenswrapper[4867]: E0101 09:03:18.932670 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727cf46e-4a29-4a4a-9c90-6052bc53068c" containerName="extract-content" Jan 01 09:03:18 crc kubenswrapper[4867]: I0101 09:03:18.932684 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="727cf46e-4a29-4a4a-9c90-6052bc53068c" containerName="extract-content" Jan 01 09:03:18 crc kubenswrapper[4867]: E0101 09:03:18.932706 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="49bb3516-f7e9-4417-a18f-eb1690123f63" containerName="extract-utilities" Jan 01 09:03:18 crc kubenswrapper[4867]: I0101 09:03:18.932714 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bb3516-f7e9-4417-a18f-eb1690123f63" containerName="extract-utilities" Jan 01 09:03:18 crc kubenswrapper[4867]: E0101 09:03:18.932726 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727cf46e-4a29-4a4a-9c90-6052bc53068c" containerName="extract-utilities" Jan 01 09:03:18 crc kubenswrapper[4867]: I0101 09:03:18.932735 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="727cf46e-4a29-4a4a-9c90-6052bc53068c" containerName="extract-utilities" Jan 01 09:03:18 crc kubenswrapper[4867]: E0101 09:03:18.932753 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bb3516-f7e9-4417-a18f-eb1690123f63" containerName="extract-content" Jan 01 09:03:18 crc kubenswrapper[4867]: I0101 09:03:18.932763 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bb3516-f7e9-4417-a18f-eb1690123f63" containerName="extract-content" Jan 01 09:03:18 crc kubenswrapper[4867]: E0101 09:03:18.932780 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bb3516-f7e9-4417-a18f-eb1690123f63" containerName="registry-server" Jan 01 09:03:18 crc kubenswrapper[4867]: I0101 09:03:18.932787 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bb3516-f7e9-4417-a18f-eb1690123f63" containerName="registry-server" Jan 01 09:03:18 crc kubenswrapper[4867]: E0101 09:03:18.932796 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727cf46e-4a29-4a4a-9c90-6052bc53068c" containerName="registry-server" Jan 01 09:03:18 crc kubenswrapper[4867]: I0101 09:03:18.932803 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="727cf46e-4a29-4a4a-9c90-6052bc53068c" containerName="registry-server" Jan 01 09:03:18 crc kubenswrapper[4867]: I0101 09:03:18.933019 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49bb3516-f7e9-4417-a18f-eb1690123f63" containerName="registry-server" Jan 01 09:03:18 crc kubenswrapper[4867]: I0101 09:03:18.933049 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="727cf46e-4a29-4a4a-9c90-6052bc53068c" containerName="registry-server" Jan 01 09:03:18 crc kubenswrapper[4867]: I0101 09:03:18.934206 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:18 crc kubenswrapper[4867]: I0101 09:03:18.947193 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tc4ww"] Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.048039 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-utilities\") pod \"community-operators-tc4ww\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.048554 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f984g\" (UniqueName: \"kubernetes.io/projected/0a54ad58-03ef-48e4-b86c-14a583446cfe-kube-api-access-f984g\") pod \"community-operators-tc4ww\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.048671 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-catalog-content\") pod \"community-operators-tc4ww\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.150110 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-catalog-content\") pod \"community-operators-tc4ww\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.150233 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-utilities\") pod \"community-operators-tc4ww\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.150336 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f984g\" (UniqueName: \"kubernetes.io/projected/0a54ad58-03ef-48e4-b86c-14a583446cfe-kube-api-access-f984g\") pod \"community-operators-tc4ww\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.150918 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-catalog-content\") pod \"community-operators-tc4ww\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.151056 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-utilities\") pod \"community-operators-tc4ww\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.178108 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f984g\" (UniqueName: \"kubernetes.io/projected/0a54ad58-03ef-48e4-b86c-14a583446cfe-kube-api-access-f984g\") pod \"community-operators-tc4ww\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.261742 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.750422 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tc4ww"] Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.971982 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc4ww" event={"ID":"0a54ad58-03ef-48e4-b86c-14a583446cfe","Type":"ContainerStarted","Data":"5d6267e8e01129d5a5a0fce055a0a21f0afa16eadd7cb59fea6e6596dfad3c6e"} Jan 01 09:03:19 crc kubenswrapper[4867]: I0101 09:03:19.972047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc4ww" event={"ID":"0a54ad58-03ef-48e4-b86c-14a583446cfe","Type":"ContainerStarted","Data":"24b7c375e5271d5a149ea27cbcd311540bdcf4e69f97638de6869c06c04de04b"} Jan 01 09:03:20 crc kubenswrapper[4867]: I0101 09:03:20.984395 4867 generic.go:334] "Generic (PLEG): container finished" podID="0a54ad58-03ef-48e4-b86c-14a583446cfe" containerID="5d6267e8e01129d5a5a0fce055a0a21f0afa16eadd7cb59fea6e6596dfad3c6e" exitCode=0 Jan 01 09:03:20 crc kubenswrapper[4867]: I0101 09:03:20.984505 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc4ww" event={"ID":"0a54ad58-03ef-48e4-b86c-14a583446cfe","Type":"ContainerDied","Data":"5d6267e8e01129d5a5a0fce055a0a21f0afa16eadd7cb59fea6e6596dfad3c6e"} Jan 01 09:03:21 crc kubenswrapper[4867]: I0101 09:03:21.330967 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:03:21 crc kubenswrapper[4867]: I0101 09:03:21.331053 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:03:22 crc kubenswrapper[4867]: I0101 09:03:22.000086 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc4ww" event={"ID":"0a54ad58-03ef-48e4-b86c-14a583446cfe","Type":"ContainerStarted","Data":"6dd7a5c991411567492673c8d5bb8688b70e3bd36730c442f0c6909ffca64337"} Jan 01 09:03:23 crc kubenswrapper[4867]: I0101 09:03:23.016081 4867 generic.go:334] "Generic (PLEG): container finished" podID="0a54ad58-03ef-48e4-b86c-14a583446cfe" containerID="6dd7a5c991411567492673c8d5bb8688b70e3bd36730c442f0c6909ffca64337" exitCode=0 Jan 01 09:03:23 crc kubenswrapper[4867]: I0101 09:03:23.016245 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc4ww" event={"ID":"0a54ad58-03ef-48e4-b86c-14a583446cfe","Type":"ContainerDied","Data":"6dd7a5c991411567492673c8d5bb8688b70e3bd36730c442f0c6909ffca64337"} Jan 01 09:03:24 crc kubenswrapper[4867]: I0101 09:03:24.030154 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc4ww" event={"ID":"0a54ad58-03ef-48e4-b86c-14a583446cfe","Type":"ContainerStarted","Data":"207a519eb05773c03a136d0b925191f2464b11a4a0f044dd04d96b4ed8909b33"} Jan 01 09:03:24 crc kubenswrapper[4867]: I0101 09:03:24.055197 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-tc4ww" podStartSLOduration=3.516695368 podStartE2EDuration="6.05517326s" podCreationTimestamp="2026-01-01 09:03:18 +0000 UTC" firstStartedPulling="2026-01-01 09:03:20.987013297 +0000 UTC m=+2210.122282106" lastFinishedPulling="2026-01-01 09:03:23.525491199 +0000 UTC m=+2212.660759998" observedRunningTime="2026-01-01 09:03:24.051277429 +0000 UTC m=+2213.186546238" watchObservedRunningTime="2026-01-01 09:03:24.05517326 +0000 UTC m=+2213.190442069" Jan 01 09:03:29 crc kubenswrapper[4867]: I0101 09:03:29.262759 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:29 crc kubenswrapper[4867]: I0101 09:03:29.263384 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:29 crc kubenswrapper[4867]: I0101 09:03:29.342454 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:30 crc kubenswrapper[4867]: I0101 09:03:30.158937 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:30 crc kubenswrapper[4867]: I0101 09:03:30.220094 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tc4ww"] Jan 01 09:03:32 crc kubenswrapper[4867]: I0101 09:03:32.102699 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tc4ww" podUID="0a54ad58-03ef-48e4-b86c-14a583446cfe" containerName="registry-server" containerID="cri-o://207a519eb05773c03a136d0b925191f2464b11a4a0f044dd04d96b4ed8909b33" gracePeriod=2 Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.120671 4867 generic.go:334] "Generic (PLEG): container finished" podID="0a54ad58-03ef-48e4-b86c-14a583446cfe" 
containerID="207a519eb05773c03a136d0b925191f2464b11a4a0f044dd04d96b4ed8909b33" exitCode=0 Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.120730 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc4ww" event={"ID":"0a54ad58-03ef-48e4-b86c-14a583446cfe","Type":"ContainerDied","Data":"207a519eb05773c03a136d0b925191f2464b11a4a0f044dd04d96b4ed8909b33"} Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.291295 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.481644 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-utilities\") pod \"0a54ad58-03ef-48e4-b86c-14a583446cfe\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.481721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-catalog-content\") pod \"0a54ad58-03ef-48e4-b86c-14a583446cfe\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.481780 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f984g\" (UniqueName: \"kubernetes.io/projected/0a54ad58-03ef-48e4-b86c-14a583446cfe-kube-api-access-f984g\") pod \"0a54ad58-03ef-48e4-b86c-14a583446cfe\" (UID: \"0a54ad58-03ef-48e4-b86c-14a583446cfe\") " Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.483459 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-utilities" (OuterVolumeSpecName: "utilities") pod "0a54ad58-03ef-48e4-b86c-14a583446cfe" (UID: 
"0a54ad58-03ef-48e4-b86c-14a583446cfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.502081 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a54ad58-03ef-48e4-b86c-14a583446cfe-kube-api-access-f984g" (OuterVolumeSpecName: "kube-api-access-f984g") pod "0a54ad58-03ef-48e4-b86c-14a583446cfe" (UID: "0a54ad58-03ef-48e4-b86c-14a583446cfe"). InnerVolumeSpecName "kube-api-access-f984g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.583835 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.584097 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f984g\" (UniqueName: \"kubernetes.io/projected/0a54ad58-03ef-48e4-b86c-14a583446cfe-kube-api-access-f984g\") on node \"crc\" DevicePath \"\"" Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.586334 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a54ad58-03ef-48e4-b86c-14a583446cfe" (UID: "0a54ad58-03ef-48e4-b86c-14a583446cfe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:03:33 crc kubenswrapper[4867]: I0101 09:03:33.686162 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a54ad58-03ef-48e4-b86c-14a583446cfe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:03:34 crc kubenswrapper[4867]: I0101 09:03:34.135101 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tc4ww" event={"ID":"0a54ad58-03ef-48e4-b86c-14a583446cfe","Type":"ContainerDied","Data":"24b7c375e5271d5a149ea27cbcd311540bdcf4e69f97638de6869c06c04de04b"} Jan 01 09:03:34 crc kubenswrapper[4867]: I0101 09:03:34.135182 4867 scope.go:117] "RemoveContainer" containerID="207a519eb05773c03a136d0b925191f2464b11a4a0f044dd04d96b4ed8909b33" Jan 01 09:03:34 crc kubenswrapper[4867]: I0101 09:03:34.135186 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tc4ww" Jan 01 09:03:34 crc kubenswrapper[4867]: I0101 09:03:34.167231 4867 scope.go:117] "RemoveContainer" containerID="6dd7a5c991411567492673c8d5bb8688b70e3bd36730c442f0c6909ffca64337" Jan 01 09:03:34 crc kubenswrapper[4867]: I0101 09:03:34.195275 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tc4ww"] Jan 01 09:03:34 crc kubenswrapper[4867]: I0101 09:03:34.205748 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tc4ww"] Jan 01 09:03:34 crc kubenswrapper[4867]: I0101 09:03:34.227625 4867 scope.go:117] "RemoveContainer" containerID="5d6267e8e01129d5a5a0fce055a0a21f0afa16eadd7cb59fea6e6596dfad3c6e" Jan 01 09:03:35 crc kubenswrapper[4867]: I0101 09:03:35.145953 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a54ad58-03ef-48e4-b86c-14a583446cfe" path="/var/lib/kubelet/pods/0a54ad58-03ef-48e4-b86c-14a583446cfe/volumes" Jan 01 09:03:51 crc 
kubenswrapper[4867]: I0101 09:03:51.330652 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:03:51 crc kubenswrapper[4867]: I0101 09:03:51.331266 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:04:21 crc kubenswrapper[4867]: I0101 09:04:21.331366 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:04:21 crc kubenswrapper[4867]: I0101 09:04:21.332228 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:04:21 crc kubenswrapper[4867]: I0101 09:04:21.332293 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 09:04:21 crc kubenswrapper[4867]: I0101 09:04:21.333104 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d"} 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 09:04:21 crc kubenswrapper[4867]: I0101 09:04:21.333210 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" gracePeriod=600 Jan 01 09:04:21 crc kubenswrapper[4867]: E0101 09:04:21.463631 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:04:21 crc kubenswrapper[4867]: I0101 09:04:21.595315 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" exitCode=0 Jan 01 09:04:21 crc kubenswrapper[4867]: I0101 09:04:21.595379 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d"} Jan 01 09:04:21 crc kubenswrapper[4867]: I0101 09:04:21.595462 4867 scope.go:117] "RemoveContainer" containerID="05744890d4b5444f64f18b9e166da6da0d9404f93ba3a8da1439f955bbdb4673" Jan 01 09:04:21 crc kubenswrapper[4867]: I0101 09:04:21.596217 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 
01 09:04:21 crc kubenswrapper[4867]: E0101 09:04:21.596589 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:04:34 crc kubenswrapper[4867]: I0101 09:04:34.128373 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:04:34 crc kubenswrapper[4867]: E0101 09:04:34.129217 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:04:48 crc kubenswrapper[4867]: I0101 09:04:48.129177 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:04:48 crc kubenswrapper[4867]: E0101 09:04:48.130644 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:04:59 crc kubenswrapper[4867]: I0101 09:04:59.129290 4867 scope.go:117] "RemoveContainer" 
containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:04:59 crc kubenswrapper[4867]: E0101 09:04:59.130416 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:05:04 crc kubenswrapper[4867]: E0101 09:05:04.851687 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 01 09:05:14 crc kubenswrapper[4867]: I0101 09:05:14.128347 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:05:14 crc kubenswrapper[4867]: E0101 09:05:14.129321 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:05:28 crc kubenswrapper[4867]: I0101 09:05:28.128097 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:05:28 crc kubenswrapper[4867]: E0101 09:05:28.128916 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:05:41 crc kubenswrapper[4867]: I0101 09:05:41.140061 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:05:41 crc kubenswrapper[4867]: E0101 09:05:41.141330 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:05:55 crc kubenswrapper[4867]: I0101 09:05:55.148389 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:05:55 crc kubenswrapper[4867]: E0101 09:05:55.149960 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:06:09 crc kubenswrapper[4867]: I0101 09:06:09.129373 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:06:09 crc kubenswrapper[4867]: E0101 09:06:09.130647 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:06:21 crc kubenswrapper[4867]: I0101 09:06:21.138006 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:06:21 crc kubenswrapper[4867]: E0101 09:06:21.139257 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:06:34 crc kubenswrapper[4867]: I0101 09:06:34.129520 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:06:34 crc kubenswrapper[4867]: E0101 09:06:34.130423 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:06:47 crc kubenswrapper[4867]: I0101 09:06:47.129608 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:06:47 crc kubenswrapper[4867]: E0101 09:06:47.130706 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:07:02 crc kubenswrapper[4867]: I0101 09:07:02.129018 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:07:02 crc kubenswrapper[4867]: E0101 09:07:02.130820 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:07:13 crc kubenswrapper[4867]: I0101 09:07:13.129074 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:07:13 crc kubenswrapper[4867]: E0101 09:07:13.130250 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:07:24 crc kubenswrapper[4867]: I0101 09:07:24.128738 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:07:24 crc kubenswrapper[4867]: E0101 09:07:24.130033 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:07:38 crc kubenswrapper[4867]: I0101 09:07:38.129283 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:07:38 crc kubenswrapper[4867]: E0101 09:07:38.130409 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:07:53 crc kubenswrapper[4867]: I0101 09:07:53.128410 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:07:53 crc kubenswrapper[4867]: E0101 09:07:53.129786 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:08:07 crc kubenswrapper[4867]: I0101 09:08:07.129589 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:08:07 crc kubenswrapper[4867]: E0101 09:08:07.130606 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:08:20 crc kubenswrapper[4867]: I0101 09:08:20.128831 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:08:20 crc kubenswrapper[4867]: E0101 09:08:20.130139 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:08:34 crc kubenswrapper[4867]: I0101 09:08:34.128681 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:08:34 crc kubenswrapper[4867]: E0101 09:08:34.129702 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:08:45 crc kubenswrapper[4867]: I0101 09:08:45.129127 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:08:45 crc kubenswrapper[4867]: E0101 09:08:45.130347 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:08:58 crc kubenswrapper[4867]: I0101 09:08:58.128477 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:08:58 crc kubenswrapper[4867]: E0101 09:08:58.129085 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:09:13 crc kubenswrapper[4867]: I0101 09:09:13.129570 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:09:13 crc kubenswrapper[4867]: E0101 09:09:13.130887 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:09:27 crc kubenswrapper[4867]: I0101 09:09:27.129236 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:09:27 crc kubenswrapper[4867]: I0101 09:09:27.496666 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"908a4602ceecff884c055e9699de4f997ff05cae9b6663c175d73f9402664972"} Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.089088 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n2ws8"] Jan 01 09:11:35 crc kubenswrapper[4867]: E0101 09:11:35.090182 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a54ad58-03ef-48e4-b86c-14a583446cfe" containerName="extract-content" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.090205 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a54ad58-03ef-48e4-b86c-14a583446cfe" containerName="extract-content" Jan 01 09:11:35 crc kubenswrapper[4867]: E0101 09:11:35.090231 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a54ad58-03ef-48e4-b86c-14a583446cfe" containerName="extract-utilities" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.090243 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a54ad58-03ef-48e4-b86c-14a583446cfe" containerName="extract-utilities" Jan 01 09:11:35 crc kubenswrapper[4867]: E0101 09:11:35.090280 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a54ad58-03ef-48e4-b86c-14a583446cfe" containerName="registry-server" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.090294 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a54ad58-03ef-48e4-b86c-14a583446cfe" containerName="registry-server" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.090522 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a54ad58-03ef-48e4-b86c-14a583446cfe" containerName="registry-server" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.092240 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.152368 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n2ws8"] Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.155510 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58c7d\" (UniqueName: \"kubernetes.io/projected/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-kube-api-access-58c7d\") pod \"redhat-operators-n2ws8\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.155715 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-utilities\") pod \"redhat-operators-n2ws8\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.155859 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-catalog-content\") pod \"redhat-operators-n2ws8\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.256864 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-utilities\") pod \"redhat-operators-n2ws8\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.257239 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-catalog-content\") pod \"redhat-operators-n2ws8\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.257285 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58c7d\" (UniqueName: \"kubernetes.io/projected/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-kube-api-access-58c7d\") pod \"redhat-operators-n2ws8\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.257346 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-utilities\") pod \"redhat-operators-n2ws8\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.257717 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-catalog-content\") pod \"redhat-operators-n2ws8\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.282475 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58c7d\" (UniqueName: \"kubernetes.io/projected/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-kube-api-access-58c7d\") pod \"redhat-operators-n2ws8\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.426511 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:35 crc kubenswrapper[4867]: I0101 09:11:35.896826 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n2ws8"] Jan 01 09:11:36 crc kubenswrapper[4867]: I0101 09:11:36.703542 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerID="b76cc4eefe233a341a75ed826593d914ae7695e5b4083626f3822b5708f68d29" exitCode=0 Jan 01 09:11:36 crc kubenswrapper[4867]: I0101 09:11:36.703587 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2ws8" event={"ID":"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea","Type":"ContainerDied","Data":"b76cc4eefe233a341a75ed826593d914ae7695e5b4083626f3822b5708f68d29"} Jan 01 09:11:36 crc kubenswrapper[4867]: I0101 09:11:36.703613 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2ws8" event={"ID":"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea","Type":"ContainerStarted","Data":"23d1706adf859c13435c156bbf49d870aaf8a514ad2abbff77c264142f21e9c2"} Jan 01 09:11:36 crc kubenswrapper[4867]: I0101 09:11:36.707292 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 09:11:37 crc kubenswrapper[4867]: I0101 09:11:37.717596 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2ws8" event={"ID":"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea","Type":"ContainerStarted","Data":"b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d"} Jan 01 09:11:38 crc kubenswrapper[4867]: I0101 09:11:38.729807 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerID="b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d" exitCode=0 Jan 01 09:11:38 crc kubenswrapper[4867]: I0101 09:11:38.729876 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-n2ws8" event={"ID":"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea","Type":"ContainerDied","Data":"b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d"} Jan 01 09:11:39 crc kubenswrapper[4867]: I0101 09:11:39.741182 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2ws8" event={"ID":"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea","Type":"ContainerStarted","Data":"3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53"} Jan 01 09:11:39 crc kubenswrapper[4867]: I0101 09:11:39.775420 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n2ws8" podStartSLOduration=2.266436002 podStartE2EDuration="4.775387669s" podCreationTimestamp="2026-01-01 09:11:35 +0000 UTC" firstStartedPulling="2026-01-01 09:11:36.707012737 +0000 UTC m=+2705.842281506" lastFinishedPulling="2026-01-01 09:11:39.215964364 +0000 UTC m=+2708.351233173" observedRunningTime="2026-01-01 09:11:39.769055678 +0000 UTC m=+2708.904324447" watchObservedRunningTime="2026-01-01 09:11:39.775387669 +0000 UTC m=+2708.910656478" Jan 01 09:11:45 crc kubenswrapper[4867]: I0101 09:11:45.427497 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:45 crc kubenswrapper[4867]: I0101 09:11:45.428519 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:46 crc kubenswrapper[4867]: I0101 09:11:46.506776 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n2ws8" podUID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerName="registry-server" probeResult="failure" output=< Jan 01 09:11:46 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 01 09:11:46 crc kubenswrapper[4867]: > Jan 01 09:11:51 crc kubenswrapper[4867]: I0101 
09:11:51.331652 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:11:51 crc kubenswrapper[4867]: I0101 09:11:51.332372 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:11:55 crc kubenswrapper[4867]: I0101 09:11:55.513968 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:55 crc kubenswrapper[4867]: I0101 09:11:55.583568 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:55 crc kubenswrapper[4867]: I0101 09:11:55.753007 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n2ws8"] Jan 01 09:11:56 crc kubenswrapper[4867]: I0101 09:11:56.909765 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n2ws8" podUID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerName="registry-server" containerID="cri-o://3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53" gracePeriod=2 Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.405449 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.502144 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58c7d\" (UniqueName: \"kubernetes.io/projected/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-kube-api-access-58c7d\") pod \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.502261 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-catalog-content\") pod \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.502320 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-utilities\") pod \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\" (UID: \"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea\") " Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.503461 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-utilities" (OuterVolumeSpecName: "utilities") pod "ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" (UID: "ce4d51a6-9a77-4ddb-bfbb-98c325a574ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.509861 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-kube-api-access-58c7d" (OuterVolumeSpecName: "kube-api-access-58c7d") pod "ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" (UID: "ce4d51a6-9a77-4ddb-bfbb-98c325a574ea"). InnerVolumeSpecName "kube-api-access-58c7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.604286 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58c7d\" (UniqueName: \"kubernetes.io/projected/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-kube-api-access-58c7d\") on node \"crc\" DevicePath \"\"" Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.604354 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.633052 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" (UID: "ce4d51a6-9a77-4ddb-bfbb-98c325a574ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.705425 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.923702 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerID="3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53" exitCode=0 Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.923771 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2ws8" event={"ID":"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea","Type":"ContainerDied","Data":"3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53"} Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.923830 4867 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n2ws8" Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.923857 4867 scope.go:117] "RemoveContainer" containerID="3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53" Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.923837 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2ws8" event={"ID":"ce4d51a6-9a77-4ddb-bfbb-98c325a574ea","Type":"ContainerDied","Data":"23d1706adf859c13435c156bbf49d870aaf8a514ad2abbff77c264142f21e9c2"} Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.964451 4867 scope.go:117] "RemoveContainer" containerID="b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d" Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.979545 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n2ws8"] Jan 01 09:11:57 crc kubenswrapper[4867]: I0101 09:11:57.984467 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n2ws8"] Jan 01 09:11:58 crc kubenswrapper[4867]: I0101 09:11:58.017218 4867 scope.go:117] "RemoveContainer" containerID="b76cc4eefe233a341a75ed826593d914ae7695e5b4083626f3822b5708f68d29" Jan 01 09:11:58 crc kubenswrapper[4867]: I0101 09:11:58.041984 4867 scope.go:117] "RemoveContainer" containerID="3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53" Jan 01 09:11:58 crc kubenswrapper[4867]: E0101 09:11:58.042472 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53\": container with ID starting with 3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53 not found: ID does not exist" containerID="3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53" Jan 01 09:11:58 crc kubenswrapper[4867]: I0101 09:11:58.042510 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53"} err="failed to get container status \"3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53\": rpc error: code = NotFound desc = could not find container \"3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53\": container with ID starting with 3c0eddb8544db3a02e999fd8d4729427bcc17bcbb6c94f5ecc8ea6fd2a3f2d53 not found: ID does not exist" Jan 01 09:11:58 crc kubenswrapper[4867]: I0101 09:11:58.042534 4867 scope.go:117] "RemoveContainer" containerID="b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d" Jan 01 09:11:58 crc kubenswrapper[4867]: E0101 09:11:58.042845 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d\": container with ID starting with b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d not found: ID does not exist" containerID="b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d" Jan 01 09:11:58 crc kubenswrapper[4867]: I0101 09:11:58.042873 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d"} err="failed to get container status \"b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d\": rpc error: code = NotFound desc = could not find container \"b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d\": container with ID starting with b8ca39932497c658fe76f2c64eb3e0b4b2499bef02a309fe3d160c69edbab31d not found: ID does not exist" Jan 01 09:11:58 crc kubenswrapper[4867]: I0101 09:11:58.042939 4867 scope.go:117] "RemoveContainer" containerID="b76cc4eefe233a341a75ed826593d914ae7695e5b4083626f3822b5708f68d29" Jan 01 09:11:58 crc kubenswrapper[4867]: E0101 
09:11:58.043216 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b76cc4eefe233a341a75ed826593d914ae7695e5b4083626f3822b5708f68d29\": container with ID starting with b76cc4eefe233a341a75ed826593d914ae7695e5b4083626f3822b5708f68d29 not found: ID does not exist" containerID="b76cc4eefe233a341a75ed826593d914ae7695e5b4083626f3822b5708f68d29" Jan 01 09:11:58 crc kubenswrapper[4867]: I0101 09:11:58.043242 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76cc4eefe233a341a75ed826593d914ae7695e5b4083626f3822b5708f68d29"} err="failed to get container status \"b76cc4eefe233a341a75ed826593d914ae7695e5b4083626f3822b5708f68d29\": rpc error: code = NotFound desc = could not find container \"b76cc4eefe233a341a75ed826593d914ae7695e5b4083626f3822b5708f68d29\": container with ID starting with b76cc4eefe233a341a75ed826593d914ae7695e5b4083626f3822b5708f68d29 not found: ID does not exist" Jan 01 09:11:59 crc kubenswrapper[4867]: I0101 09:11:59.137421 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" path="/var/lib/kubelet/pods/ce4d51a6-9a77-4ddb-bfbb-98c325a574ea/volumes" Jan 01 09:12:21 crc kubenswrapper[4867]: I0101 09:12:21.331242 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:12:21 crc kubenswrapper[4867]: I0101 09:12:21.332082 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 01 09:12:51 crc kubenswrapper[4867]: I0101 09:12:51.331356 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:12:51 crc kubenswrapper[4867]: I0101 09:12:51.332122 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:12:51 crc kubenswrapper[4867]: I0101 09:12:51.332195 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 09:12:51 crc kubenswrapper[4867]: I0101 09:12:51.333136 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"908a4602ceecff884c055e9699de4f997ff05cae9b6663c175d73f9402664972"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 09:12:51 crc kubenswrapper[4867]: I0101 09:12:51.333260 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://908a4602ceecff884c055e9699de4f997ff05cae9b6663c175d73f9402664972" gracePeriod=600 Jan 01 09:12:52 crc kubenswrapper[4867]: I0101 09:12:52.436837 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" 
containerID="908a4602ceecff884c055e9699de4f997ff05cae9b6663c175d73f9402664972" exitCode=0 Jan 01 09:12:52 crc kubenswrapper[4867]: I0101 09:12:52.436958 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"908a4602ceecff884c055e9699de4f997ff05cae9b6663c175d73f9402664972"} Jan 01 09:12:52 crc kubenswrapper[4867]: I0101 09:12:52.437319 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792"} Jan 01 09:12:52 crc kubenswrapper[4867]: I0101 09:12:52.437340 4867 scope.go:117] "RemoveContainer" containerID="609f1758578fd1e0359ee4d55ab7e708346e2b21307ff1d0616dfca955606c9d" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.755970 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6pfsn"] Jan 01 09:13:18 crc kubenswrapper[4867]: E0101 09:13:18.757052 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerName="registry-server" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.757083 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerName="registry-server" Jan 01 09:13:18 crc kubenswrapper[4867]: E0101 09:13:18.757147 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerName="extract-content" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.757165 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerName="extract-content" Jan 01 09:13:18 crc kubenswrapper[4867]: E0101 09:13:18.757194 4867 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerName="extract-utilities" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.757207 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerName="extract-utilities" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.757458 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4d51a6-9a77-4ddb-bfbb-98c325a574ea" containerName="registry-server" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.759164 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.766074 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pfsn"] Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.818456 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-utilities\") pod \"certified-operators-6pfsn\" (UID: \"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.818506 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-catalog-content\") pod \"certified-operators-6pfsn\" (UID: \"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.818536 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skzb4\" (UniqueName: \"kubernetes.io/projected/bb3cde0e-a56c-4171-aade-85b73c0e9685-kube-api-access-skzb4\") pod 
\"certified-operators-6pfsn\" (UID: \"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.920144 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-utilities\") pod \"certified-operators-6pfsn\" (UID: \"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.920224 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-catalog-content\") pod \"certified-operators-6pfsn\" (UID: \"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.920267 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skzb4\" (UniqueName: \"kubernetes.io/projected/bb3cde0e-a56c-4171-aade-85b73c0e9685-kube-api-access-skzb4\") pod \"certified-operators-6pfsn\" (UID: \"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.920901 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-utilities\") pod \"certified-operators-6pfsn\" (UID: \"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.921203 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-catalog-content\") pod \"certified-operators-6pfsn\" (UID: 
\"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:18 crc kubenswrapper[4867]: I0101 09:13:18.950430 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skzb4\" (UniqueName: \"kubernetes.io/projected/bb3cde0e-a56c-4171-aade-85b73c0e9685-kube-api-access-skzb4\") pod \"certified-operators-6pfsn\" (UID: \"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:19 crc kubenswrapper[4867]: I0101 09:13:19.090961 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:19 crc kubenswrapper[4867]: I0101 09:13:19.669107 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pfsn"] Jan 01 09:13:20 crc kubenswrapper[4867]: I0101 09:13:20.527854 4867 generic.go:334] "Generic (PLEG): container finished" podID="bb3cde0e-a56c-4171-aade-85b73c0e9685" containerID="4e2a208d702b3a77590645a56d69bd6feb774a8211a330d9526e869665b5f108" exitCode=0 Jan 01 09:13:20 crc kubenswrapper[4867]: I0101 09:13:20.527958 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pfsn" event={"ID":"bb3cde0e-a56c-4171-aade-85b73c0e9685","Type":"ContainerDied","Data":"4e2a208d702b3a77590645a56d69bd6feb774a8211a330d9526e869665b5f108"} Jan 01 09:13:20 crc kubenswrapper[4867]: I0101 09:13:20.528257 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pfsn" event={"ID":"bb3cde0e-a56c-4171-aade-85b73c0e9685","Type":"ContainerStarted","Data":"31a89f708d7b0a0cceeff72bbe294f4412e98cc520f9c72c8072fdb274afdec6"} Jan 01 09:13:21 crc kubenswrapper[4867]: I0101 09:13:21.542716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pfsn" 
event={"ID":"bb3cde0e-a56c-4171-aade-85b73c0e9685","Type":"ContainerStarted","Data":"7d3b6b702eaa35503e8be12fb7e15612599329ec8f3b66c1d677764f2afbd029"} Jan 01 09:13:22 crc kubenswrapper[4867]: I0101 09:13:22.555153 4867 generic.go:334] "Generic (PLEG): container finished" podID="bb3cde0e-a56c-4171-aade-85b73c0e9685" containerID="7d3b6b702eaa35503e8be12fb7e15612599329ec8f3b66c1d677764f2afbd029" exitCode=0 Jan 01 09:13:22 crc kubenswrapper[4867]: I0101 09:13:22.555283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pfsn" event={"ID":"bb3cde0e-a56c-4171-aade-85b73c0e9685","Type":"ContainerDied","Data":"7d3b6b702eaa35503e8be12fb7e15612599329ec8f3b66c1d677764f2afbd029"} Jan 01 09:13:23 crc kubenswrapper[4867]: I0101 09:13:23.566871 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pfsn" event={"ID":"bb3cde0e-a56c-4171-aade-85b73c0e9685","Type":"ContainerStarted","Data":"c6b704515fc485ce42fc808d0f5537172c79977cac9861425442ddce1043a71d"} Jan 01 09:13:23 crc kubenswrapper[4867]: I0101 09:13:23.591582 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6pfsn" podStartSLOduration=3.171455072 podStartE2EDuration="5.591561178s" podCreationTimestamp="2026-01-01 09:13:18 +0000 UTC" firstStartedPulling="2026-01-01 09:13:20.530141605 +0000 UTC m=+2809.665410404" lastFinishedPulling="2026-01-01 09:13:22.950247711 +0000 UTC m=+2812.085516510" observedRunningTime="2026-01-01 09:13:23.584496706 +0000 UTC m=+2812.719765505" watchObservedRunningTime="2026-01-01 09:13:23.591561178 +0000 UTC m=+2812.726829957" Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.785691 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s5dvz"] Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.788198 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.804111 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5dvz"] Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.844599 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzzjh\" (UniqueName: \"kubernetes.io/projected/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-kube-api-access-rzzjh\") pod \"redhat-marketplace-s5dvz\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.844772 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-catalog-content\") pod \"redhat-marketplace-s5dvz\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.844862 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-utilities\") pod \"redhat-marketplace-s5dvz\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.946786 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-utilities\") pod \"redhat-marketplace-s5dvz\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.946874 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rzzjh\" (UniqueName: \"kubernetes.io/projected/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-kube-api-access-rzzjh\") pod \"redhat-marketplace-s5dvz\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.946943 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-catalog-content\") pod \"redhat-marketplace-s5dvz\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.947303 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-utilities\") pod \"redhat-marketplace-s5dvz\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.947376 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-catalog-content\") pod \"redhat-marketplace-s5dvz\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:26 crc kubenswrapper[4867]: I0101 09:13:26.968457 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzzjh\" (UniqueName: \"kubernetes.io/projected/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-kube-api-access-rzzjh\") pod \"redhat-marketplace-s5dvz\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:27 crc kubenswrapper[4867]: I0101 09:13:27.111774 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:27 crc kubenswrapper[4867]: I0101 09:13:27.583585 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5dvz"] Jan 01 09:13:27 crc kubenswrapper[4867]: W0101 09:13:27.587512 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dfa6a3d_d6f8_401b_b0b2_227b16d3a682.slice/crio-ee6ac664bb459c4d27a130c9e853e0ba932e0accec7f37e181dddaac74c142e3 WatchSource:0}: Error finding container ee6ac664bb459c4d27a130c9e853e0ba932e0accec7f37e181dddaac74c142e3: Status 404 returned error can't find the container with id ee6ac664bb459c4d27a130c9e853e0ba932e0accec7f37e181dddaac74c142e3 Jan 01 09:13:27 crc kubenswrapper[4867]: I0101 09:13:27.603242 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5dvz" event={"ID":"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682","Type":"ContainerStarted","Data":"ee6ac664bb459c4d27a130c9e853e0ba932e0accec7f37e181dddaac74c142e3"} Jan 01 09:13:28 crc kubenswrapper[4867]: I0101 09:13:28.611966 4867 generic.go:334] "Generic (PLEG): container finished" podID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" containerID="65088dd6daac86d5043e5772f5a3adfa78185e4ed0d8ed293cae607aadb6657b" exitCode=0 Jan 01 09:13:28 crc kubenswrapper[4867]: I0101 09:13:28.612019 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5dvz" event={"ID":"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682","Type":"ContainerDied","Data":"65088dd6daac86d5043e5772f5a3adfa78185e4ed0d8ed293cae607aadb6657b"} Jan 01 09:13:29 crc kubenswrapper[4867]: I0101 09:13:29.091913 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:29 crc kubenswrapper[4867]: I0101 09:13:29.092448 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:29 crc kubenswrapper[4867]: I0101 09:13:29.167448 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:29 crc kubenswrapper[4867]: I0101 09:13:29.621967 4867 generic.go:334] "Generic (PLEG): container finished" podID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" containerID="3b8b7174fa60057027a455f035a28249b8b3504ee23c34d91c29409ee7ecf501" exitCode=0 Jan 01 09:13:29 crc kubenswrapper[4867]: I0101 09:13:29.622026 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5dvz" event={"ID":"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682","Type":"ContainerDied","Data":"3b8b7174fa60057027a455f035a28249b8b3504ee23c34d91c29409ee7ecf501"} Jan 01 09:13:29 crc kubenswrapper[4867]: I0101 09:13:29.695276 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:30 crc kubenswrapper[4867]: I0101 09:13:30.637256 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5dvz" event={"ID":"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682","Type":"ContainerStarted","Data":"bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63"} Jan 01 09:13:30 crc kubenswrapper[4867]: I0101 09:13:30.669428 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s5dvz" podStartSLOduration=3.090283558 podStartE2EDuration="4.66940804s" podCreationTimestamp="2026-01-01 09:13:26 +0000 UTC" firstStartedPulling="2026-01-01 09:13:28.61398091 +0000 UTC m=+2817.749249679" lastFinishedPulling="2026-01-01 09:13:30.193105382 +0000 UTC m=+2819.328374161" observedRunningTime="2026-01-01 09:13:30.664809989 +0000 UTC m=+2819.800078798" watchObservedRunningTime="2026-01-01 09:13:30.66940804 +0000 UTC m=+2819.804676819" Jan 01 
09:13:31 crc kubenswrapper[4867]: I0101 09:13:31.526109 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pfsn"] Jan 01 09:13:32 crc kubenswrapper[4867]: I0101 09:13:32.651985 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6pfsn" podUID="bb3cde0e-a56c-4171-aade-85b73c0e9685" containerName="registry-server" containerID="cri-o://c6b704515fc485ce42fc808d0f5537172c79977cac9861425442ddce1043a71d" gracePeriod=2 Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.670119 4867 generic.go:334] "Generic (PLEG): container finished" podID="bb3cde0e-a56c-4171-aade-85b73c0e9685" containerID="c6b704515fc485ce42fc808d0f5537172c79977cac9861425442ddce1043a71d" exitCode=0 Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.670322 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pfsn" event={"ID":"bb3cde0e-a56c-4171-aade-85b73c0e9685","Type":"ContainerDied","Data":"c6b704515fc485ce42fc808d0f5537172c79977cac9861425442ddce1043a71d"} Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.670621 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pfsn" event={"ID":"bb3cde0e-a56c-4171-aade-85b73c0e9685","Type":"ContainerDied","Data":"31a89f708d7b0a0cceeff72bbe294f4412e98cc520f9c72c8072fdb274afdec6"} Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.670645 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a89f708d7b0a0cceeff72bbe294f4412e98cc520f9c72c8072fdb274afdec6" Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.704455 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.750979 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-utilities\") pod \"bb3cde0e-a56c-4171-aade-85b73c0e9685\" (UID: \"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.751061 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-catalog-content\") pod \"bb3cde0e-a56c-4171-aade-85b73c0e9685\" (UID: \"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.751095 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skzb4\" (UniqueName: \"kubernetes.io/projected/bb3cde0e-a56c-4171-aade-85b73c0e9685-kube-api-access-skzb4\") pod \"bb3cde0e-a56c-4171-aade-85b73c0e9685\" (UID: \"bb3cde0e-a56c-4171-aade-85b73c0e9685\") " Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.752737 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-utilities" (OuterVolumeSpecName: "utilities") pod "bb3cde0e-a56c-4171-aade-85b73c0e9685" (UID: "bb3cde0e-a56c-4171-aade-85b73c0e9685"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.759378 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3cde0e-a56c-4171-aade-85b73c0e9685-kube-api-access-skzb4" (OuterVolumeSpecName: "kube-api-access-skzb4") pod "bb3cde0e-a56c-4171-aade-85b73c0e9685" (UID: "bb3cde0e-a56c-4171-aade-85b73c0e9685"). InnerVolumeSpecName "kube-api-access-skzb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.813775 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb3cde0e-a56c-4171-aade-85b73c0e9685" (UID: "bb3cde0e-a56c-4171-aade-85b73c0e9685"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.852650 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.852679 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3cde0e-a56c-4171-aade-85b73c0e9685-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:13:33 crc kubenswrapper[4867]: I0101 09:13:33.852694 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skzb4\" (UniqueName: \"kubernetes.io/projected/bb3cde0e-a56c-4171-aade-85b73c0e9685-kube-api-access-skzb4\") on node \"crc\" DevicePath \"\"" Jan 01 09:13:34 crc kubenswrapper[4867]: I0101 09:13:34.680354 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6pfsn" Jan 01 09:13:34 crc kubenswrapper[4867]: I0101 09:13:34.724828 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pfsn"] Jan 01 09:13:34 crc kubenswrapper[4867]: I0101 09:13:34.735462 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6pfsn"] Jan 01 09:13:35 crc kubenswrapper[4867]: I0101 09:13:35.144217 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3cde0e-a56c-4171-aade-85b73c0e9685" path="/var/lib/kubelet/pods/bb3cde0e-a56c-4171-aade-85b73c0e9685/volumes" Jan 01 09:13:37 crc kubenswrapper[4867]: I0101 09:13:37.112392 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:37 crc kubenswrapper[4867]: I0101 09:13:37.112907 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:37 crc kubenswrapper[4867]: I0101 09:13:37.194284 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:37 crc kubenswrapper[4867]: I0101 09:13:37.782823 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:38 crc kubenswrapper[4867]: I0101 09:13:38.732513 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5dvz"] Jan 01 09:13:39 crc kubenswrapper[4867]: I0101 09:13:39.731469 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s5dvz" podUID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" containerName="registry-server" containerID="cri-o://bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63" gracePeriod=2 Jan 01 09:13:40 crc 
kubenswrapper[4867]: I0101 09:13:40.733638 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.741570 4867 generic.go:334] "Generic (PLEG): container finished" podID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" containerID="bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63" exitCode=0 Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.741602 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5dvz" event={"ID":"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682","Type":"ContainerDied","Data":"bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63"} Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.741625 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5dvz" event={"ID":"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682","Type":"ContainerDied","Data":"ee6ac664bb459c4d27a130c9e853e0ba932e0accec7f37e181dddaac74c142e3"} Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.741641 4867 scope.go:117] "RemoveContainer" containerID="bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.741656 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5dvz" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.775165 4867 scope.go:117] "RemoveContainer" containerID="3b8b7174fa60057027a455f035a28249b8b3504ee23c34d91c29409ee7ecf501" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.820222 4867 scope.go:117] "RemoveContainer" containerID="65088dd6daac86d5043e5772f5a3adfa78185e4ed0d8ed293cae607aadb6657b" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.844526 4867 scope.go:117] "RemoveContainer" containerID="bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63" Jan 01 09:13:40 crc kubenswrapper[4867]: E0101 09:13:40.845007 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63\": container with ID starting with bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63 not found: ID does not exist" containerID="bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.845047 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63"} err="failed to get container status \"bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63\": rpc error: code = NotFound desc = could not find container \"bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63\": container with ID starting with bf5b726dfe102d5eba7b35c7ef92e6b7b77155e0d3294dc57c3053b199777f63 not found: ID does not exist" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.845074 4867 scope.go:117] "RemoveContainer" containerID="3b8b7174fa60057027a455f035a28249b8b3504ee23c34d91c29409ee7ecf501" Jan 01 09:13:40 crc kubenswrapper[4867]: E0101 09:13:40.845625 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"3b8b7174fa60057027a455f035a28249b8b3504ee23c34d91c29409ee7ecf501\": container with ID starting with 3b8b7174fa60057027a455f035a28249b8b3504ee23c34d91c29409ee7ecf501 not found: ID does not exist" containerID="3b8b7174fa60057027a455f035a28249b8b3504ee23c34d91c29409ee7ecf501" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.845668 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b8b7174fa60057027a455f035a28249b8b3504ee23c34d91c29409ee7ecf501"} err="failed to get container status \"3b8b7174fa60057027a455f035a28249b8b3504ee23c34d91c29409ee7ecf501\": rpc error: code = NotFound desc = could not find container \"3b8b7174fa60057027a455f035a28249b8b3504ee23c34d91c29409ee7ecf501\": container with ID starting with 3b8b7174fa60057027a455f035a28249b8b3504ee23c34d91c29409ee7ecf501 not found: ID does not exist" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.845697 4867 scope.go:117] "RemoveContainer" containerID="65088dd6daac86d5043e5772f5a3adfa78185e4ed0d8ed293cae607aadb6657b" Jan 01 09:13:40 crc kubenswrapper[4867]: E0101 09:13:40.846018 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65088dd6daac86d5043e5772f5a3adfa78185e4ed0d8ed293cae607aadb6657b\": container with ID starting with 65088dd6daac86d5043e5772f5a3adfa78185e4ed0d8ed293cae607aadb6657b not found: ID does not exist" containerID="65088dd6daac86d5043e5772f5a3adfa78185e4ed0d8ed293cae607aadb6657b" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.846042 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65088dd6daac86d5043e5772f5a3adfa78185e4ed0d8ed293cae607aadb6657b"} err="failed to get container status \"65088dd6daac86d5043e5772f5a3adfa78185e4ed0d8ed293cae607aadb6657b\": rpc error: code = NotFound desc = could not find container 
\"65088dd6daac86d5043e5772f5a3adfa78185e4ed0d8ed293cae607aadb6657b\": container with ID starting with 65088dd6daac86d5043e5772f5a3adfa78185e4ed0d8ed293cae607aadb6657b not found: ID does not exist" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.886410 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-catalog-content\") pod \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.886492 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzzjh\" (UniqueName: \"kubernetes.io/projected/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-kube-api-access-rzzjh\") pod \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.886552 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-utilities\") pod \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\" (UID: \"7dfa6a3d-d6f8-401b-b0b2-227b16d3a682\") " Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.887541 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-utilities" (OuterVolumeSpecName: "utilities") pod "7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" (UID: "7dfa6a3d-d6f8-401b-b0b2-227b16d3a682"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.894208 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-kube-api-access-rzzjh" (OuterVolumeSpecName: "kube-api-access-rzzjh") pod "7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" (UID: "7dfa6a3d-d6f8-401b-b0b2-227b16d3a682"). InnerVolumeSpecName "kube-api-access-rzzjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.930454 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" (UID: "7dfa6a3d-d6f8-401b-b0b2-227b16d3a682"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.989545 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.989647 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzzjh\" (UniqueName: \"kubernetes.io/projected/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-kube-api-access-rzzjh\") on node \"crc\" DevicePath \"\"" Jan 01 09:13:40 crc kubenswrapper[4867]: I0101 09:13:40.989683 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:13:41 crc kubenswrapper[4867]: I0101 09:13:41.085263 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5dvz"] Jan 01 09:13:41 crc kubenswrapper[4867]: I0101 
09:13:41.093115 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5dvz"] Jan 01 09:13:41 crc kubenswrapper[4867]: I0101 09:13:41.144857 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" path="/var/lib/kubelet/pods/7dfa6a3d-d6f8-401b-b0b2-227b16d3a682/volumes" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.523462 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vm79k"] Jan 01 09:14:41 crc kubenswrapper[4867]: E0101 09:14:41.524778 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3cde0e-a56c-4171-aade-85b73c0e9685" containerName="registry-server" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.524802 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3cde0e-a56c-4171-aade-85b73c0e9685" containerName="registry-server" Jan 01 09:14:41 crc kubenswrapper[4867]: E0101 09:14:41.524870 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" containerName="registry-server" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.524908 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" containerName="registry-server" Jan 01 09:14:41 crc kubenswrapper[4867]: E0101 09:14:41.524935 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3cde0e-a56c-4171-aade-85b73c0e9685" containerName="extract-utilities" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.524951 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3cde0e-a56c-4171-aade-85b73c0e9685" containerName="extract-utilities" Jan 01 09:14:41 crc kubenswrapper[4867]: E0101 09:14:41.524966 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3cde0e-a56c-4171-aade-85b73c0e9685" containerName="extract-content" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 
09:14:41.524978 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3cde0e-a56c-4171-aade-85b73c0e9685" containerName="extract-content" Jan 01 09:14:41 crc kubenswrapper[4867]: E0101 09:14:41.524996 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" containerName="extract-utilities" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.525007 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" containerName="extract-utilities" Jan 01 09:14:41 crc kubenswrapper[4867]: E0101 09:14:41.525031 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" containerName="extract-content" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.525045 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" containerName="extract-content" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.525476 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfa6a3d-d6f8-401b-b0b2-227b16d3a682" containerName="registry-server" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.525538 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3cde0e-a56c-4171-aade-85b73c0e9685" containerName="registry-server" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.528578 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.560359 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vm79k"] Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.589165 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf8bm\" (UniqueName: \"kubernetes.io/projected/f622b0f9-ee68-46f6-a4f1-42d368b350e1-kube-api-access-zf8bm\") pod \"community-operators-vm79k\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.589986 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-catalog-content\") pod \"community-operators-vm79k\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.590095 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-utilities\") pod \"community-operators-vm79k\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.691234 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-catalog-content\") pod \"community-operators-vm79k\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.691286 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-utilities\") pod \"community-operators-vm79k\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.691347 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf8bm\" (UniqueName: \"kubernetes.io/projected/f622b0f9-ee68-46f6-a4f1-42d368b350e1-kube-api-access-zf8bm\") pod \"community-operators-vm79k\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.691836 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-catalog-content\") pod \"community-operators-vm79k\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.691859 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-utilities\") pod \"community-operators-vm79k\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.726681 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf8bm\" (UniqueName: \"kubernetes.io/projected/f622b0f9-ee68-46f6-a4f1-42d368b350e1-kube-api-access-zf8bm\") pod \"community-operators-vm79k\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:41 crc kubenswrapper[4867]: I0101 09:14:41.869575 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:42 crc kubenswrapper[4867]: I0101 09:14:42.398246 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vm79k"] Jan 01 09:14:42 crc kubenswrapper[4867]: W0101 09:14:42.405359 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf622b0f9_ee68_46f6_a4f1_42d368b350e1.slice/crio-db2b104601c2eef4410121e83ddfc13fd31c588991c11f78ae42e62ddd03a94c WatchSource:0}: Error finding container db2b104601c2eef4410121e83ddfc13fd31c588991c11f78ae42e62ddd03a94c: Status 404 returned error can't find the container with id db2b104601c2eef4410121e83ddfc13fd31c588991c11f78ae42e62ddd03a94c Jan 01 09:14:43 crc kubenswrapper[4867]: I0101 09:14:43.416207 4867 generic.go:334] "Generic (PLEG): container finished" podID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" containerID="8ac15dd3920ceb1fca397bdaf04ba1bddbd3a33792c76555502b304c048b5daf" exitCode=0 Jan 01 09:14:43 crc kubenswrapper[4867]: I0101 09:14:43.416314 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm79k" event={"ID":"f622b0f9-ee68-46f6-a4f1-42d368b350e1","Type":"ContainerDied","Data":"8ac15dd3920ceb1fca397bdaf04ba1bddbd3a33792c76555502b304c048b5daf"} Jan 01 09:14:43 crc kubenswrapper[4867]: I0101 09:14:43.416752 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm79k" event={"ID":"f622b0f9-ee68-46f6-a4f1-42d368b350e1","Type":"ContainerStarted","Data":"db2b104601c2eef4410121e83ddfc13fd31c588991c11f78ae42e62ddd03a94c"} Jan 01 09:14:44 crc kubenswrapper[4867]: I0101 09:14:44.434304 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm79k" 
event={"ID":"f622b0f9-ee68-46f6-a4f1-42d368b350e1","Type":"ContainerStarted","Data":"2896797fe16b5017788326ba3e64643d089c5e8c4c59b52e93a052eef272e060"} Jan 01 09:14:45 crc kubenswrapper[4867]: I0101 09:14:45.448504 4867 generic.go:334] "Generic (PLEG): container finished" podID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" containerID="2896797fe16b5017788326ba3e64643d089c5e8c4c59b52e93a052eef272e060" exitCode=0 Jan 01 09:14:45 crc kubenswrapper[4867]: I0101 09:14:45.448569 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm79k" event={"ID":"f622b0f9-ee68-46f6-a4f1-42d368b350e1","Type":"ContainerDied","Data":"2896797fe16b5017788326ba3e64643d089c5e8c4c59b52e93a052eef272e060"} Jan 01 09:14:46 crc kubenswrapper[4867]: I0101 09:14:46.464117 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm79k" event={"ID":"f622b0f9-ee68-46f6-a4f1-42d368b350e1","Type":"ContainerStarted","Data":"dac57c3b76fdc7148132129d56951f480bca93d26dece6a8a24834360f3ab2aa"} Jan 01 09:14:46 crc kubenswrapper[4867]: I0101 09:14:46.497551 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vm79k" podStartSLOduration=3.065038324 podStartE2EDuration="5.497520695s" podCreationTimestamp="2026-01-01 09:14:41 +0000 UTC" firstStartedPulling="2026-01-01 09:14:43.418465057 +0000 UTC m=+2892.553733866" lastFinishedPulling="2026-01-01 09:14:45.850947468 +0000 UTC m=+2894.986216237" observedRunningTime="2026-01-01 09:14:46.49560112 +0000 UTC m=+2895.630869959" watchObservedRunningTime="2026-01-01 09:14:46.497520695 +0000 UTC m=+2895.632789504" Jan 01 09:14:51 crc kubenswrapper[4867]: I0101 09:14:51.331035 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 01 09:14:51 crc kubenswrapper[4867]: I0101 09:14:51.331780 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:14:51 crc kubenswrapper[4867]: I0101 09:14:51.869791 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:51 crc kubenswrapper[4867]: I0101 09:14:51.869938 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:51 crc kubenswrapper[4867]: I0101 09:14:51.928420 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:52 crc kubenswrapper[4867]: I0101 09:14:52.596214 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:52 crc kubenswrapper[4867]: I0101 09:14:52.666622 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vm79k"] Jan 01 09:14:54 crc kubenswrapper[4867]: I0101 09:14:54.558507 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vm79k" podUID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" containerName="registry-server" containerID="cri-o://dac57c3b76fdc7148132129d56951f480bca93d26dece6a8a24834360f3ab2aa" gracePeriod=2 Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.578354 4867 generic.go:334] "Generic (PLEG): container finished" podID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" containerID="dac57c3b76fdc7148132129d56951f480bca93d26dece6a8a24834360f3ab2aa" exitCode=0 
Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.578398 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm79k" event={"ID":"f622b0f9-ee68-46f6-a4f1-42d368b350e1","Type":"ContainerDied","Data":"dac57c3b76fdc7148132129d56951f480bca93d26dece6a8a24834360f3ab2aa"} Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.689037 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.725212 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-utilities\") pod \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.725337 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf8bm\" (UniqueName: \"kubernetes.io/projected/f622b0f9-ee68-46f6-a4f1-42d368b350e1-kube-api-access-zf8bm\") pod \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.725564 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-catalog-content\") pod \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\" (UID: \"f622b0f9-ee68-46f6-a4f1-42d368b350e1\") " Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.728767 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-utilities" (OuterVolumeSpecName: "utilities") pod "f622b0f9-ee68-46f6-a4f1-42d368b350e1" (UID: "f622b0f9-ee68-46f6-a4f1-42d368b350e1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.733263 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f622b0f9-ee68-46f6-a4f1-42d368b350e1-kube-api-access-zf8bm" (OuterVolumeSpecName: "kube-api-access-zf8bm") pod "f622b0f9-ee68-46f6-a4f1-42d368b350e1" (UID: "f622b0f9-ee68-46f6-a4f1-42d368b350e1"). InnerVolumeSpecName "kube-api-access-zf8bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.784312 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f622b0f9-ee68-46f6-a4f1-42d368b350e1" (UID: "f622b0f9-ee68-46f6-a4f1-42d368b350e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.828142 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.828360 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f622b0f9-ee68-46f6-a4f1-42d368b350e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:14:55 crc kubenswrapper[4867]: I0101 09:14:55.828546 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf8bm\" (UniqueName: \"kubernetes.io/projected/f622b0f9-ee68-46f6-a4f1-42d368b350e1-kube-api-access-zf8bm\") on node \"crc\" DevicePath \"\"" Jan 01 09:14:56 crc kubenswrapper[4867]: I0101 09:14:56.593667 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vm79k" 
event={"ID":"f622b0f9-ee68-46f6-a4f1-42d368b350e1","Type":"ContainerDied","Data":"db2b104601c2eef4410121e83ddfc13fd31c588991c11f78ae42e62ddd03a94c"} Jan 01 09:14:56 crc kubenswrapper[4867]: I0101 09:14:56.593744 4867 scope.go:117] "RemoveContainer" containerID="dac57c3b76fdc7148132129d56951f480bca93d26dece6a8a24834360f3ab2aa" Jan 01 09:14:56 crc kubenswrapper[4867]: I0101 09:14:56.595165 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vm79k" Jan 01 09:14:56 crc kubenswrapper[4867]: I0101 09:14:56.623727 4867 scope.go:117] "RemoveContainer" containerID="2896797fe16b5017788326ba3e64643d089c5e8c4c59b52e93a052eef272e060" Jan 01 09:14:56 crc kubenswrapper[4867]: I0101 09:14:56.653546 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vm79k"] Jan 01 09:14:56 crc kubenswrapper[4867]: I0101 09:14:56.663592 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vm79k"] Jan 01 09:14:56 crc kubenswrapper[4867]: I0101 09:14:56.664952 4867 scope.go:117] "RemoveContainer" containerID="8ac15dd3920ceb1fca397bdaf04ba1bddbd3a33792c76555502b304c048b5daf" Jan 01 09:14:57 crc kubenswrapper[4867]: I0101 09:14:57.147480 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" path="/var/lib/kubelet/pods/f622b0f9-ee68-46f6-a4f1-42d368b350e1/volumes" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.182213 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6"] Jan 01 09:15:00 crc kubenswrapper[4867]: E0101 09:15:00.183367 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" containerName="registry-server" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.183388 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" containerName="registry-server" Jan 01 09:15:00 crc kubenswrapper[4867]: E0101 09:15:00.183419 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" containerName="extract-utilities" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.183430 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" containerName="extract-utilities" Jan 01 09:15:00 crc kubenswrapper[4867]: E0101 09:15:00.183454 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" containerName="extract-content" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.183464 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" containerName="extract-content" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.183727 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f622b0f9-ee68-46f6-a4f1-42d368b350e1" containerName="registry-server" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.184405 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.194415 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.194713 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.195194 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6"] Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.303635 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-secret-volume\") pod \"collect-profiles-29454315-g2ds6\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.303741 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-config-volume\") pod \"collect-profiles-29454315-g2ds6\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.303789 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswn4\" (UniqueName: \"kubernetes.io/projected/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-kube-api-access-vswn4\") pod \"collect-profiles-29454315-g2ds6\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.405278 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-secret-volume\") pod \"collect-profiles-29454315-g2ds6\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.405434 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-config-volume\") pod \"collect-profiles-29454315-g2ds6\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.406804 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-config-volume\") pod \"collect-profiles-29454315-g2ds6\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.406971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vswn4\" (UniqueName: \"kubernetes.io/projected/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-kube-api-access-vswn4\") pod \"collect-profiles-29454315-g2ds6\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.415723 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-secret-volume\") pod \"collect-profiles-29454315-g2ds6\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.436260 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vswn4\" (UniqueName: \"kubernetes.io/projected/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-kube-api-access-vswn4\") pod \"collect-profiles-29454315-g2ds6\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.509475 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:00 crc kubenswrapper[4867]: W0101 09:15:00.974190 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ccd453a_3c47_4fad_88e6_e5dc9b9bd631.slice/crio-2ca7528d5ee3a6d0b1fed5afcf39fc56981213e126119d32739f653f7b253521 WatchSource:0}: Error finding container 2ca7528d5ee3a6d0b1fed5afcf39fc56981213e126119d32739f653f7b253521: Status 404 returned error can't find the container with id 2ca7528d5ee3a6d0b1fed5afcf39fc56981213e126119d32739f653f7b253521 Jan 01 09:15:00 crc kubenswrapper[4867]: I0101 09:15:00.975290 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6"] Jan 01 09:15:01 crc kubenswrapper[4867]: I0101 09:15:01.665084 4867 generic.go:334] "Generic (PLEG): container finished" podID="7ccd453a-3c47-4fad-88e6-e5dc9b9bd631" containerID="c80bd55e3c9000c010da52c346ec10195daeacc554c41911326baceb7894f5c2" exitCode=0 Jan 01 09:15:01 crc kubenswrapper[4867]: I0101 09:15:01.665145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" event={"ID":"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631","Type":"ContainerDied","Data":"c80bd55e3c9000c010da52c346ec10195daeacc554c41911326baceb7894f5c2"} Jan 01 09:15:01 crc kubenswrapper[4867]: I0101 09:15:01.665451 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" event={"ID":"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631","Type":"ContainerStarted","Data":"2ca7528d5ee3a6d0b1fed5afcf39fc56981213e126119d32739f653f7b253521"} Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.003080 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.075329 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-secret-volume\") pod \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.075423 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vswn4\" (UniqueName: \"kubernetes.io/projected/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-kube-api-access-vswn4\") pod \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.075542 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-config-volume\") pod \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\" (UID: \"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631\") " Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.076920 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-config-volume" (OuterVolumeSpecName: "config-volume") pod "7ccd453a-3c47-4fad-88e6-e5dc9b9bd631" (UID: "7ccd453a-3c47-4fad-88e6-e5dc9b9bd631"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.081392 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-kube-api-access-vswn4" (OuterVolumeSpecName: "kube-api-access-vswn4") pod "7ccd453a-3c47-4fad-88e6-e5dc9b9bd631" (UID: "7ccd453a-3c47-4fad-88e6-e5dc9b9bd631"). InnerVolumeSpecName "kube-api-access-vswn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.081990 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7ccd453a-3c47-4fad-88e6-e5dc9b9bd631" (UID: "7ccd453a-3c47-4fad-88e6-e5dc9b9bd631"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.176506 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-config-volume\") on node \"crc\" DevicePath \"\"" Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.176538 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.176552 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vswn4\" (UniqueName: \"kubernetes.io/projected/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631-kube-api-access-vswn4\") on node \"crc\" DevicePath \"\"" Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.696108 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" event={"ID":"7ccd453a-3c47-4fad-88e6-e5dc9b9bd631","Type":"ContainerDied","Data":"2ca7528d5ee3a6d0b1fed5afcf39fc56981213e126119d32739f653f7b253521"} Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.696182 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ca7528d5ee3a6d0b1fed5afcf39fc56981213e126119d32739f653f7b253521" Jan 01 09:15:03 crc kubenswrapper[4867]: I0101 09:15:03.696207 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6" Jan 01 09:15:04 crc kubenswrapper[4867]: I0101 09:15:04.104139 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng"] Jan 01 09:15:04 crc kubenswrapper[4867]: I0101 09:15:04.115538 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454270-wfdng"] Jan 01 09:15:05 crc kubenswrapper[4867]: I0101 09:15:05.147640 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c691144-adcf-4de6-b068-db1692decd23" path="/var/lib/kubelet/pods/2c691144-adcf-4de6-b068-db1692decd23/volumes" Jan 01 09:15:21 crc kubenswrapper[4867]: I0101 09:15:21.331218 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:15:21 crc kubenswrapper[4867]: I0101 09:15:21.332001 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:15:39 crc kubenswrapper[4867]: I0101 09:15:39.618659 4867 scope.go:117] "RemoveContainer" containerID="4b9ebd95cc3faeb089504ece3ff02d7506d20aa66d81d7e28f72045e46f07a0f" Jan 01 09:15:51 crc kubenswrapper[4867]: I0101 09:15:51.331353 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 01 09:15:51 crc kubenswrapper[4867]: I0101 09:15:51.335238 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:15:51 crc kubenswrapper[4867]: I0101 09:15:51.335396 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 09:15:51 crc kubenswrapper[4867]: I0101 09:15:51.336506 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 09:15:51 crc kubenswrapper[4867]: I0101 09:15:51.336608 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" gracePeriod=600 Jan 01 09:15:51 crc kubenswrapper[4867]: E0101 09:15:51.458529 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:15:52 crc kubenswrapper[4867]: I0101 
09:15:52.164368 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" exitCode=0 Jan 01 09:15:52 crc kubenswrapper[4867]: I0101 09:15:52.164432 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792"} Jan 01 09:15:52 crc kubenswrapper[4867]: I0101 09:15:52.164463 4867 scope.go:117] "RemoveContainer" containerID="908a4602ceecff884c055e9699de4f997ff05cae9b6663c175d73f9402664972" Jan 01 09:15:52 crc kubenswrapper[4867]: I0101 09:15:52.165585 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:15:52 crc kubenswrapper[4867]: E0101 09:15:52.166387 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:16:04 crc kubenswrapper[4867]: I0101 09:16:04.129803 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:16:04 crc kubenswrapper[4867]: E0101 09:16:04.130985 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:16:17 crc kubenswrapper[4867]: I0101 09:16:17.129260 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:16:17 crc kubenswrapper[4867]: E0101 09:16:17.130172 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:16:29 crc kubenswrapper[4867]: I0101 09:16:29.128936 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:16:29 crc kubenswrapper[4867]: E0101 09:16:29.129924 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:16:42 crc kubenswrapper[4867]: I0101 09:16:42.130147 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:16:42 crc kubenswrapper[4867]: E0101 09:16:42.131272 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:16:53 crc kubenswrapper[4867]: I0101 09:16:53.129145 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:16:53 crc kubenswrapper[4867]: E0101 09:16:53.131305 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:17:04 crc kubenswrapper[4867]: I0101 09:17:04.129092 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:17:04 crc kubenswrapper[4867]: E0101 09:17:04.130052 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:17:18 crc kubenswrapper[4867]: I0101 09:17:18.128787 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:17:18 crc kubenswrapper[4867]: E0101 09:17:18.129623 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:17:32 crc kubenswrapper[4867]: I0101 09:17:32.129720 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:17:32 crc kubenswrapper[4867]: E0101 09:17:32.130957 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:17:43 crc kubenswrapper[4867]: I0101 09:17:43.129041 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:17:43 crc kubenswrapper[4867]: E0101 09:17:43.130507 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:17:54 crc kubenswrapper[4867]: I0101 09:17:54.128958 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:17:54 crc kubenswrapper[4867]: E0101 09:17:54.130143 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:18:06 crc kubenswrapper[4867]: I0101 09:18:06.129165 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:18:06 crc kubenswrapper[4867]: E0101 09:18:06.130305 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:18:21 crc kubenswrapper[4867]: I0101 09:18:21.134233 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:18:21 crc kubenswrapper[4867]: E0101 09:18:21.135000 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:18:33 crc kubenswrapper[4867]: I0101 09:18:33.128848 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:18:33 crc kubenswrapper[4867]: E0101 09:18:33.131178 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:18:44 crc kubenswrapper[4867]: I0101 09:18:44.128776 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:18:44 crc kubenswrapper[4867]: E0101 09:18:44.129649 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:18:59 crc kubenswrapper[4867]: I0101 09:18:59.129275 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:18:59 crc kubenswrapper[4867]: E0101 09:18:59.130123 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:19:14 crc kubenswrapper[4867]: I0101 09:19:14.129012 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:19:14 crc kubenswrapper[4867]: E0101 09:19:14.130346 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:19:26 crc kubenswrapper[4867]: I0101 09:19:26.129304 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:19:26 crc kubenswrapper[4867]: E0101 09:19:26.130630 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:19:39 crc kubenswrapper[4867]: I0101 09:19:39.747664 4867 scope.go:117] "RemoveContainer" containerID="4e2a208d702b3a77590645a56d69bd6feb774a8211a330d9526e869665b5f108" Jan 01 09:19:39 crc kubenswrapper[4867]: I0101 09:19:39.781439 4867 scope.go:117] "RemoveContainer" containerID="c6b704515fc485ce42fc808d0f5537172c79977cac9861425442ddce1043a71d" Jan 01 09:19:39 crc kubenswrapper[4867]: I0101 09:19:39.809413 4867 scope.go:117] "RemoveContainer" containerID="7d3b6b702eaa35503e8be12fb7e15612599329ec8f3b66c1d677764f2afbd029" Jan 01 09:19:40 crc kubenswrapper[4867]: I0101 09:19:40.129177 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:19:40 crc kubenswrapper[4867]: E0101 09:19:40.129599 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:19:52 crc kubenswrapper[4867]: I0101 09:19:52.129191 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:19:52 crc kubenswrapper[4867]: E0101 09:19:52.131288 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:20:05 crc kubenswrapper[4867]: I0101 09:20:05.129143 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:20:05 crc kubenswrapper[4867]: E0101 09:20:05.130317 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:20:18 crc kubenswrapper[4867]: I0101 09:20:18.128544 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:20:18 crc kubenswrapper[4867]: E0101 09:20:18.129771 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:20:29 crc kubenswrapper[4867]: I0101 09:20:29.128322 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:20:29 crc kubenswrapper[4867]: E0101 09:20:29.129247 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:20:43 crc kubenswrapper[4867]: I0101 09:20:43.129596 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:20:43 crc kubenswrapper[4867]: E0101 09:20:43.131332 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:20:57 crc kubenswrapper[4867]: I0101 09:20:57.128696 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:20:57 crc kubenswrapper[4867]: I0101 09:20:57.964540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" 
event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"15ec261e737e02f10bb4e58227cc31403721a5336c7e9233d62e654fee1472f1"} Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.435323 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b4795"] Jan 01 09:21:45 crc kubenswrapper[4867]: E0101 09:21:45.436418 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ccd453a-3c47-4fad-88e6-e5dc9b9bd631" containerName="collect-profiles" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.436440 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ccd453a-3c47-4fad-88e6-e5dc9b9bd631" containerName="collect-profiles" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.436728 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ccd453a-3c47-4fad-88e6-e5dc9b9bd631" containerName="collect-profiles" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.438779 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.462531 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4795"] Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.539128 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgsq6\" (UniqueName: \"kubernetes.io/projected/ef982ae9-c3f7-4a42-8b36-380f41dd6285-kube-api-access-xgsq6\") pod \"redhat-operators-b4795\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.539528 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-utilities\") pod \"redhat-operators-b4795\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.539758 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-catalog-content\") pod \"redhat-operators-b4795\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.641401 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgsq6\" (UniqueName: \"kubernetes.io/projected/ef982ae9-c3f7-4a42-8b36-380f41dd6285-kube-api-access-xgsq6\") pod \"redhat-operators-b4795\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.641486 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-utilities\") pod \"redhat-operators-b4795\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.641622 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-catalog-content\") pod \"redhat-operators-b4795\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.642417 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-utilities\") pod \"redhat-operators-b4795\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.642527 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-catalog-content\") pod \"redhat-operators-b4795\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.662671 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgsq6\" (UniqueName: \"kubernetes.io/projected/ef982ae9-c3f7-4a42-8b36-380f41dd6285-kube-api-access-xgsq6\") pod \"redhat-operators-b4795\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:45 crc kubenswrapper[4867]: I0101 09:21:45.784311 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:46 crc kubenswrapper[4867]: I0101 09:21:46.251965 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4795"] Jan 01 09:21:46 crc kubenswrapper[4867]: I0101 09:21:46.356169 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4795" event={"ID":"ef982ae9-c3f7-4a42-8b36-380f41dd6285","Type":"ContainerStarted","Data":"895a48eb4c698ec03154b80e83673168663b79943e06eabaab20f11c178e89f2"} Jan 01 09:21:47 crc kubenswrapper[4867]: I0101 09:21:47.367330 4867 generic.go:334] "Generic (PLEG): container finished" podID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerID="6a81599f1f69a717a13dfa834983f55a8b86810c5686191e361e06250489045e" exitCode=0 Jan 01 09:21:47 crc kubenswrapper[4867]: I0101 09:21:47.367450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4795" event={"ID":"ef982ae9-c3f7-4a42-8b36-380f41dd6285","Type":"ContainerDied","Data":"6a81599f1f69a717a13dfa834983f55a8b86810c5686191e361e06250489045e"} Jan 01 09:21:47 crc kubenswrapper[4867]: I0101 09:21:47.370776 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 09:21:49 crc kubenswrapper[4867]: I0101 09:21:49.386228 4867 generic.go:334] "Generic (PLEG): container finished" podID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerID="a5ecfed56073372b99c3394e4bd07686e0aa1b13da770f0d51218001c391a667" exitCode=0 Jan 01 09:21:49 crc kubenswrapper[4867]: I0101 09:21:49.386347 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4795" event={"ID":"ef982ae9-c3f7-4a42-8b36-380f41dd6285","Type":"ContainerDied","Data":"a5ecfed56073372b99c3394e4bd07686e0aa1b13da770f0d51218001c391a667"} Jan 01 09:21:50 crc kubenswrapper[4867]: I0101 09:21:50.398291 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-b4795" event={"ID":"ef982ae9-c3f7-4a42-8b36-380f41dd6285","Type":"ContainerStarted","Data":"760471310a6f6b3a65f7cdb1e91621dcb9528ffe71bfdecc86a3b351cde12634"} Jan 01 09:21:55 crc kubenswrapper[4867]: I0101 09:21:55.784793 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:55 crc kubenswrapper[4867]: I0101 09:21:55.785404 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:21:56 crc kubenswrapper[4867]: I0101 09:21:56.836538 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b4795" podUID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerName="registry-server" probeResult="failure" output=< Jan 01 09:21:56 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 01 09:21:56 crc kubenswrapper[4867]: > Jan 01 09:22:05 crc kubenswrapper[4867]: I0101 09:22:05.855532 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:22:05 crc kubenswrapper[4867]: I0101 09:22:05.892305 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b4795" podStartSLOduration=18.446175459 podStartE2EDuration="20.892279042s" podCreationTimestamp="2026-01-01 09:21:45 +0000 UTC" firstStartedPulling="2026-01-01 09:21:47.37029231 +0000 UTC m=+3316.505561119" lastFinishedPulling="2026-01-01 09:21:49.816395933 +0000 UTC m=+3318.951664702" observedRunningTime="2026-01-01 09:21:50.422194823 +0000 UTC m=+3319.557463662" watchObservedRunningTime="2026-01-01 09:22:05.892279042 +0000 UTC m=+3335.027547841" Jan 01 09:22:05 crc kubenswrapper[4867]: I0101 09:22:05.913402 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:22:06 crc kubenswrapper[4867]: I0101 09:22:06.098840 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4795"] Jan 01 09:22:07 crc kubenswrapper[4867]: I0101 09:22:07.568336 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b4795" podUID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerName="registry-server" containerID="cri-o://760471310a6f6b3a65f7cdb1e91621dcb9528ffe71bfdecc86a3b351cde12634" gracePeriod=2 Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.575609 4867 generic.go:334] "Generic (PLEG): container finished" podID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerID="760471310a6f6b3a65f7cdb1e91621dcb9528ffe71bfdecc86a3b351cde12634" exitCode=0 Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.575919 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4795" event={"ID":"ef982ae9-c3f7-4a42-8b36-380f41dd6285","Type":"ContainerDied","Data":"760471310a6f6b3a65f7cdb1e91621dcb9528ffe71bfdecc86a3b351cde12634"} Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.627055 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.662567 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-utilities\") pod \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.662752 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-catalog-content\") pod \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.662788 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgsq6\" (UniqueName: \"kubernetes.io/projected/ef982ae9-c3f7-4a42-8b36-380f41dd6285-kube-api-access-xgsq6\") pod \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\" (UID: \"ef982ae9-c3f7-4a42-8b36-380f41dd6285\") " Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.664738 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-utilities" (OuterVolumeSpecName: "utilities") pod "ef982ae9-c3f7-4a42-8b36-380f41dd6285" (UID: "ef982ae9-c3f7-4a42-8b36-380f41dd6285"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.668047 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef982ae9-c3f7-4a42-8b36-380f41dd6285-kube-api-access-xgsq6" (OuterVolumeSpecName: "kube-api-access-xgsq6") pod "ef982ae9-c3f7-4a42-8b36-380f41dd6285" (UID: "ef982ae9-c3f7-4a42-8b36-380f41dd6285"). InnerVolumeSpecName "kube-api-access-xgsq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.765168 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.765200 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgsq6\" (UniqueName: \"kubernetes.io/projected/ef982ae9-c3f7-4a42-8b36-380f41dd6285-kube-api-access-xgsq6\") on node \"crc\" DevicePath \"\"" Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.802355 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef982ae9-c3f7-4a42-8b36-380f41dd6285" (UID: "ef982ae9-c3f7-4a42-8b36-380f41dd6285"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:22:08 crc kubenswrapper[4867]: I0101 09:22:08.867577 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef982ae9-c3f7-4a42-8b36-380f41dd6285-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:22:09 crc kubenswrapper[4867]: I0101 09:22:09.589006 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4795" event={"ID":"ef982ae9-c3f7-4a42-8b36-380f41dd6285","Type":"ContainerDied","Data":"895a48eb4c698ec03154b80e83673168663b79943e06eabaab20f11c178e89f2"} Jan 01 09:22:09 crc kubenswrapper[4867]: I0101 09:22:09.589095 4867 scope.go:117] "RemoveContainer" containerID="760471310a6f6b3a65f7cdb1e91621dcb9528ffe71bfdecc86a3b351cde12634" Jan 01 09:22:09 crc kubenswrapper[4867]: I0101 09:22:09.589134 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b4795" Jan 01 09:22:09 crc kubenswrapper[4867]: I0101 09:22:09.623849 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4795"] Jan 01 09:22:09 crc kubenswrapper[4867]: I0101 09:22:09.630999 4867 scope.go:117] "RemoveContainer" containerID="a5ecfed56073372b99c3394e4bd07686e0aa1b13da770f0d51218001c391a667" Jan 01 09:22:09 crc kubenswrapper[4867]: I0101 09:22:09.634462 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b4795"] Jan 01 09:22:09 crc kubenswrapper[4867]: I0101 09:22:09.667752 4867 scope.go:117] "RemoveContainer" containerID="6a81599f1f69a717a13dfa834983f55a8b86810c5686191e361e06250489045e" Jan 01 09:22:11 crc kubenswrapper[4867]: I0101 09:22:11.147416 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" path="/var/lib/kubelet/pods/ef982ae9-c3f7-4a42-8b36-380f41dd6285/volumes" Jan 01 09:23:21 crc kubenswrapper[4867]: I0101 09:23:21.331280 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:23:21 crc kubenswrapper[4867]: I0101 09:23:21.332031 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.561116 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xnpbw"] Jan 01 09:23:36 crc kubenswrapper[4867]: E0101 
09:23:36.562394 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerName="extract-utilities" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.562428 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerName="extract-utilities" Jan 01 09:23:36 crc kubenswrapper[4867]: E0101 09:23:36.562481 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerName="registry-server" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.562500 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerName="registry-server" Jan 01 09:23:36 crc kubenswrapper[4867]: E0101 09:23:36.562533 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerName="extract-content" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.562548 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerName="extract-content" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.562917 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef982ae9-c3f7-4a42-8b36-380f41dd6285" containerName="registry-server" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.565553 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.581202 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xnpbw"] Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.692230 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sqjw\" (UniqueName: \"kubernetes.io/projected/604074a0-8c40-49b3-a057-721cebf23541-kube-api-access-2sqjw\") pod \"certified-operators-xnpbw\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.692505 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-utilities\") pod \"certified-operators-xnpbw\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.692544 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-catalog-content\") pod \"certified-operators-xnpbw\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.793318 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sqjw\" (UniqueName: \"kubernetes.io/projected/604074a0-8c40-49b3-a057-721cebf23541-kube-api-access-2sqjw\") pod \"certified-operators-xnpbw\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.793376 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-utilities\") pod \"certified-operators-xnpbw\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.793408 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-catalog-content\") pod \"certified-operators-xnpbw\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.793907 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-utilities\") pod \"certified-operators-xnpbw\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.794431 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-catalog-content\") pod \"certified-operators-xnpbw\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.822102 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sqjw\" (UniqueName: \"kubernetes.io/projected/604074a0-8c40-49b3-a057-721cebf23541-kube-api-access-2sqjw\") pod \"certified-operators-xnpbw\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:36 crc kubenswrapper[4867]: I0101 09:23:36.889758 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:37 crc kubenswrapper[4867]: I0101 09:23:37.358452 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xnpbw"] Jan 01 09:23:37 crc kubenswrapper[4867]: I0101 09:23:37.459572 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnpbw" event={"ID":"604074a0-8c40-49b3-a057-721cebf23541","Type":"ContainerStarted","Data":"f7b99e1f58aef27bc4310fab503e96443248afe5d5dc7b5d788b053c77b25458"} Jan 01 09:23:38 crc kubenswrapper[4867]: I0101 09:23:38.473170 4867 generic.go:334] "Generic (PLEG): container finished" podID="604074a0-8c40-49b3-a057-721cebf23541" containerID="a909c219ff9e492485967da0f24a940ebcfd855cbddd305ea6de84e0f7f13ca7" exitCode=0 Jan 01 09:23:38 crc kubenswrapper[4867]: I0101 09:23:38.473651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnpbw" event={"ID":"604074a0-8c40-49b3-a057-721cebf23541","Type":"ContainerDied","Data":"a909c219ff9e492485967da0f24a940ebcfd855cbddd305ea6de84e0f7f13ca7"} Jan 01 09:23:39 crc kubenswrapper[4867]: I0101 09:23:39.482047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnpbw" event={"ID":"604074a0-8c40-49b3-a057-721cebf23541","Type":"ContainerStarted","Data":"ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7"} Jan 01 09:23:40 crc kubenswrapper[4867]: I0101 09:23:40.508384 4867 generic.go:334] "Generic (PLEG): container finished" podID="604074a0-8c40-49b3-a057-721cebf23541" containerID="ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7" exitCode=0 Jan 01 09:23:40 crc kubenswrapper[4867]: I0101 09:23:40.508476 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnpbw" 
event={"ID":"604074a0-8c40-49b3-a057-721cebf23541","Type":"ContainerDied","Data":"ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7"} Jan 01 09:23:41 crc kubenswrapper[4867]: I0101 09:23:41.523967 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnpbw" event={"ID":"604074a0-8c40-49b3-a057-721cebf23541","Type":"ContainerStarted","Data":"b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f"} Jan 01 09:23:41 crc kubenswrapper[4867]: I0101 09:23:41.559246 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xnpbw" podStartSLOduration=3.050679152 podStartE2EDuration="5.559223627s" podCreationTimestamp="2026-01-01 09:23:36 +0000 UTC" firstStartedPulling="2026-01-01 09:23:38.476076272 +0000 UTC m=+3427.611345081" lastFinishedPulling="2026-01-01 09:23:40.984620777 +0000 UTC m=+3430.119889556" observedRunningTime="2026-01-01 09:23:41.555183632 +0000 UTC m=+3430.690452431" watchObservedRunningTime="2026-01-01 09:23:41.559223627 +0000 UTC m=+3430.694492426" Jan 01 09:23:46 crc kubenswrapper[4867]: I0101 09:23:46.890003 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:46 crc kubenswrapper[4867]: I0101 09:23:46.890252 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:46 crc kubenswrapper[4867]: I0101 09:23:46.966553 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:47 crc kubenswrapper[4867]: I0101 09:23:47.659247 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:47 crc kubenswrapper[4867]: I0101 09:23:47.732164 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-xnpbw"] Jan 01 09:23:49 crc kubenswrapper[4867]: I0101 09:23:49.604220 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xnpbw" podUID="604074a0-8c40-49b3-a057-721cebf23541" containerName="registry-server" containerID="cri-o://b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f" gracePeriod=2 Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.541395 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.613262 4867 generic.go:334] "Generic (PLEG): container finished" podID="604074a0-8c40-49b3-a057-721cebf23541" containerID="b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f" exitCode=0 Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.613305 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnpbw" event={"ID":"604074a0-8c40-49b3-a057-721cebf23541","Type":"ContainerDied","Data":"b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f"} Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.613332 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnpbw" event={"ID":"604074a0-8c40-49b3-a057-721cebf23541","Type":"ContainerDied","Data":"f7b99e1f58aef27bc4310fab503e96443248afe5d5dc7b5d788b053c77b25458"} Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.613350 4867 scope.go:117] "RemoveContainer" containerID="b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.613342 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xnpbw" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.635117 4867 scope.go:117] "RemoveContainer" containerID="ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.671607 4867 scope.go:117] "RemoveContainer" containerID="a909c219ff9e492485967da0f24a940ebcfd855cbddd305ea6de84e0f7f13ca7" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.735115 4867 scope.go:117] "RemoveContainer" containerID="b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.735650 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sqjw\" (UniqueName: \"kubernetes.io/projected/604074a0-8c40-49b3-a057-721cebf23541-kube-api-access-2sqjw\") pod \"604074a0-8c40-49b3-a057-721cebf23541\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.735799 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-catalog-content\") pod \"604074a0-8c40-49b3-a057-721cebf23541\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.735856 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-utilities\") pod \"604074a0-8c40-49b3-a057-721cebf23541\" (UID: \"604074a0-8c40-49b3-a057-721cebf23541\") " Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.736846 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-utilities" (OuterVolumeSpecName: "utilities") pod "604074a0-8c40-49b3-a057-721cebf23541" (UID: 
"604074a0-8c40-49b3-a057-721cebf23541"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.737180 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.800137 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604074a0-8c40-49b3-a057-721cebf23541-kube-api-access-2sqjw" (OuterVolumeSpecName: "kube-api-access-2sqjw") pod "604074a0-8c40-49b3-a057-721cebf23541" (UID: "604074a0-8c40-49b3-a057-721cebf23541"). InnerVolumeSpecName "kube-api-access-2sqjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:23:50 crc kubenswrapper[4867]: E0101 09:23:50.800277 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f\": container with ID starting with b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f not found: ID does not exist" containerID="b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.800307 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f"} err="failed to get container status \"b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f\": rpc error: code = NotFound desc = could not find container \"b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f\": container with ID starting with b62ea306b398d8cde8c2181bf459680a916f2f04ac7585478da5fff15855a34f not found: ID does not exist" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.800345 4867 scope.go:117] 
"RemoveContainer" containerID="ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7" Jan 01 09:23:50 crc kubenswrapper[4867]: E0101 09:23:50.800860 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7\": container with ID starting with ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7 not found: ID does not exist" containerID="ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.800930 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7"} err="failed to get container status \"ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7\": rpc error: code = NotFound desc = could not find container \"ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7\": container with ID starting with ba4e548f8f3ef7bbb3ec037b4bcf3ba764389d2c63ea5211cf1f561f5efaabb7 not found: ID does not exist" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.800969 4867 scope.go:117] "RemoveContainer" containerID="a909c219ff9e492485967da0f24a940ebcfd855cbddd305ea6de84e0f7f13ca7" Jan 01 09:23:50 crc kubenswrapper[4867]: E0101 09:23:50.805023 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a909c219ff9e492485967da0f24a940ebcfd855cbddd305ea6de84e0f7f13ca7\": container with ID starting with a909c219ff9e492485967da0f24a940ebcfd855cbddd305ea6de84e0f7f13ca7 not found: ID does not exist" containerID="a909c219ff9e492485967da0f24a940ebcfd855cbddd305ea6de84e0f7f13ca7" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.805072 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a909c219ff9e492485967da0f24a940ebcfd855cbddd305ea6de84e0f7f13ca7"} err="failed to get container status \"a909c219ff9e492485967da0f24a940ebcfd855cbddd305ea6de84e0f7f13ca7\": rpc error: code = NotFound desc = could not find container \"a909c219ff9e492485967da0f24a940ebcfd855cbddd305ea6de84e0f7f13ca7\": container with ID starting with a909c219ff9e492485967da0f24a940ebcfd855cbddd305ea6de84e0f7f13ca7 not found: ID does not exist" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.837927 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sqjw\" (UniqueName: \"kubernetes.io/projected/604074a0-8c40-49b3-a057-721cebf23541-kube-api-access-2sqjw\") on node \"crc\" DevicePath \"\"" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.862182 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "604074a0-8c40-49b3-a057-721cebf23541" (UID: "604074a0-8c40-49b3-a057-721cebf23541"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.939551 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xnpbw"] Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.939995 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604074a0-8c40-49b3-a057-721cebf23541-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:23:50 crc kubenswrapper[4867]: I0101 09:23:50.946667 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xnpbw"] Jan 01 09:23:51 crc kubenswrapper[4867]: I0101 09:23:51.138739 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604074a0-8c40-49b3-a057-721cebf23541" path="/var/lib/kubelet/pods/604074a0-8c40-49b3-a057-721cebf23541/volumes" Jan 01 09:23:51 crc kubenswrapper[4867]: I0101 09:23:51.331404 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:23:51 crc kubenswrapper[4867]: I0101 09:23:51.331504 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:24:21 crc kubenswrapper[4867]: I0101 09:24:21.331320 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 01 09:24:21 crc kubenswrapper[4867]: I0101 09:24:21.332021 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:24:21 crc kubenswrapper[4867]: I0101 09:24:21.332085 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 09:24:21 crc kubenswrapper[4867]: I0101 09:24:21.332978 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15ec261e737e02f10bb4e58227cc31403721a5336c7e9233d62e654fee1472f1"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 09:24:21 crc kubenswrapper[4867]: I0101 09:24:21.333076 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://15ec261e737e02f10bb4e58227cc31403721a5336c7e9233d62e654fee1472f1" gracePeriod=600 Jan 01 09:24:21 crc kubenswrapper[4867]: I0101 09:24:21.906678 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"15ec261e737e02f10bb4e58227cc31403721a5336c7e9233d62e654fee1472f1"} Jan 01 09:24:21 crc kubenswrapper[4867]: I0101 09:24:21.907155 4867 scope.go:117] "RemoveContainer" containerID="2c2b8d53aabb550eac28585c931868494601cc65edbec6600854a2632e762792" Jan 01 09:24:21 crc 
kubenswrapper[4867]: I0101 09:24:21.906622 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="15ec261e737e02f10bb4e58227cc31403721a5336c7e9233d62e654fee1472f1" exitCode=0 Jan 01 09:24:21 crc kubenswrapper[4867]: I0101 09:24:21.907377 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815"} Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.472923 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bptq5"] Jan 01 09:24:24 crc kubenswrapper[4867]: E0101 09:24:24.473623 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604074a0-8c40-49b3-a057-721cebf23541" containerName="extract-utilities" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.473644 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="604074a0-8c40-49b3-a057-721cebf23541" containerName="extract-utilities" Jan 01 09:24:24 crc kubenswrapper[4867]: E0101 09:24:24.473672 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604074a0-8c40-49b3-a057-721cebf23541" containerName="extract-content" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.473684 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="604074a0-8c40-49b3-a057-721cebf23541" containerName="extract-content" Jan 01 09:24:24 crc kubenswrapper[4867]: E0101 09:24:24.473729 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604074a0-8c40-49b3-a057-721cebf23541" containerName="registry-server" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.473741 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="604074a0-8c40-49b3-a057-721cebf23541" containerName="registry-server" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 
09:24:24.474008 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="604074a0-8c40-49b3-a057-721cebf23541" containerName="registry-server" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.475637 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.511626 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bptq5"] Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.578272 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-utilities\") pod \"redhat-marketplace-bptq5\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.578336 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-catalog-content\") pod \"redhat-marketplace-bptq5\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.578443 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6qgz\" (UniqueName: \"kubernetes.io/projected/18a4da67-7097-498e-9ef8-14c82563963f-kube-api-access-n6qgz\") pod \"redhat-marketplace-bptq5\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.680807 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-utilities\") pod \"redhat-marketplace-bptq5\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.680857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-catalog-content\") pod \"redhat-marketplace-bptq5\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.680946 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6qgz\" (UniqueName: \"kubernetes.io/projected/18a4da67-7097-498e-9ef8-14c82563963f-kube-api-access-n6qgz\") pod \"redhat-marketplace-bptq5\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.681392 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-utilities\") pod \"redhat-marketplace-bptq5\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.681444 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-catalog-content\") pod \"redhat-marketplace-bptq5\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.716800 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6qgz\" (UniqueName: 
\"kubernetes.io/projected/18a4da67-7097-498e-9ef8-14c82563963f-kube-api-access-n6qgz\") pod \"redhat-marketplace-bptq5\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:24 crc kubenswrapper[4867]: I0101 09:24:24.809104 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:25 crc kubenswrapper[4867]: I0101 09:24:25.108145 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bptq5"] Jan 01 09:24:25 crc kubenswrapper[4867]: W0101 09:24:25.115319 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a4da67_7097_498e_9ef8_14c82563963f.slice/crio-06361239d1cf359d4358a9716981448bda22394dbb1b7f60ef297c2182bba86c WatchSource:0}: Error finding container 06361239d1cf359d4358a9716981448bda22394dbb1b7f60ef297c2182bba86c: Status 404 returned error can't find the container with id 06361239d1cf359d4358a9716981448bda22394dbb1b7f60ef297c2182bba86c Jan 01 09:24:25 crc kubenswrapper[4867]: I0101 09:24:25.985808 4867 generic.go:334] "Generic (PLEG): container finished" podID="18a4da67-7097-498e-9ef8-14c82563963f" containerID="03ade3578532873faa65cfb6a281fa2f0fd4543b41b1e473feaebffa7c2d418d" exitCode=0 Jan 01 09:24:25 crc kubenswrapper[4867]: I0101 09:24:25.985959 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bptq5" event={"ID":"18a4da67-7097-498e-9ef8-14c82563963f","Type":"ContainerDied","Data":"03ade3578532873faa65cfb6a281fa2f0fd4543b41b1e473feaebffa7c2d418d"} Jan 01 09:24:25 crc kubenswrapper[4867]: I0101 09:24:25.986014 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bptq5" 
event={"ID":"18a4da67-7097-498e-9ef8-14c82563963f","Type":"ContainerStarted","Data":"06361239d1cf359d4358a9716981448bda22394dbb1b7f60ef297c2182bba86c"} Jan 01 09:24:26 crc kubenswrapper[4867]: I0101 09:24:26.995441 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bptq5" event={"ID":"18a4da67-7097-498e-9ef8-14c82563963f","Type":"ContainerStarted","Data":"6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb"} Jan 01 09:24:28 crc kubenswrapper[4867]: I0101 09:24:28.016574 4867 generic.go:334] "Generic (PLEG): container finished" podID="18a4da67-7097-498e-9ef8-14c82563963f" containerID="6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb" exitCode=0 Jan 01 09:24:28 crc kubenswrapper[4867]: I0101 09:24:28.016644 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bptq5" event={"ID":"18a4da67-7097-498e-9ef8-14c82563963f","Type":"ContainerDied","Data":"6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb"} Jan 01 09:24:29 crc kubenswrapper[4867]: I0101 09:24:29.031963 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bptq5" event={"ID":"18a4da67-7097-498e-9ef8-14c82563963f","Type":"ContainerStarted","Data":"9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2"} Jan 01 09:24:29 crc kubenswrapper[4867]: I0101 09:24:29.062027 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bptq5" podStartSLOduration=2.5566058480000002 podStartE2EDuration="5.06199209s" podCreationTimestamp="2026-01-01 09:24:24 +0000 UTC" firstStartedPulling="2026-01-01 09:24:25.98865915 +0000 UTC m=+3475.123927949" lastFinishedPulling="2026-01-01 09:24:28.494045422 +0000 UTC m=+3477.629314191" observedRunningTime="2026-01-01 09:24:29.056273227 +0000 UTC m=+3478.191542096" watchObservedRunningTime="2026-01-01 09:24:29.06199209 +0000 UTC 
m=+3478.197260919" Jan 01 09:24:34 crc kubenswrapper[4867]: I0101 09:24:34.809333 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:34 crc kubenswrapper[4867]: I0101 09:24:34.810051 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:34 crc kubenswrapper[4867]: I0101 09:24:34.861469 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:35 crc kubenswrapper[4867]: I0101 09:24:35.166814 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:35 crc kubenswrapper[4867]: I0101 09:24:35.239652 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bptq5"] Jan 01 09:24:37 crc kubenswrapper[4867]: I0101 09:24:37.112584 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bptq5" podUID="18a4da67-7097-498e-9ef8-14c82563963f" containerName="registry-server" containerID="cri-o://9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2" gracePeriod=2 Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.121781 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.122329 4867 generic.go:334] "Generic (PLEG): container finished" podID="18a4da67-7097-498e-9ef8-14c82563963f" containerID="9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2" exitCode=0 Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.122361 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bptq5" event={"ID":"18a4da67-7097-498e-9ef8-14c82563963f","Type":"ContainerDied","Data":"9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2"} Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.122386 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bptq5" event={"ID":"18a4da67-7097-498e-9ef8-14c82563963f","Type":"ContainerDied","Data":"06361239d1cf359d4358a9716981448bda22394dbb1b7f60ef297c2182bba86c"} Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.122402 4867 scope.go:117] "RemoveContainer" containerID="9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.148992 4867 scope.go:117] "RemoveContainer" containerID="6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.173458 4867 scope.go:117] "RemoveContainer" containerID="03ade3578532873faa65cfb6a281fa2f0fd4543b41b1e473feaebffa7c2d418d" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.193014 4867 scope.go:117] "RemoveContainer" containerID="9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2" Jan 01 09:24:38 crc kubenswrapper[4867]: E0101 09:24:38.193513 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2\": container with ID starting with 
9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2 not found: ID does not exist" containerID="9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.193550 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2"} err="failed to get container status \"9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2\": rpc error: code = NotFound desc = could not find container \"9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2\": container with ID starting with 9dec76c355ca7b0c3fb368f9408dc31e65fa744d30c6f198816f43f90b499eb2 not found: ID does not exist" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.193574 4867 scope.go:117] "RemoveContainer" containerID="6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb" Jan 01 09:24:38 crc kubenswrapper[4867]: E0101 09:24:38.194009 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb\": container with ID starting with 6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb not found: ID does not exist" containerID="6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.194052 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb"} err="failed to get container status \"6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb\": rpc error: code = NotFound desc = could not find container \"6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb\": container with ID starting with 6d83f7c58580b0060071685d20e588f08bb6ed990f444a8cc671c23eec951bcb not found: ID does not 
exist" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.194082 4867 scope.go:117] "RemoveContainer" containerID="03ade3578532873faa65cfb6a281fa2f0fd4543b41b1e473feaebffa7c2d418d" Jan 01 09:24:38 crc kubenswrapper[4867]: E0101 09:24:38.194353 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ade3578532873faa65cfb6a281fa2f0fd4543b41b1e473feaebffa7c2d418d\": container with ID starting with 03ade3578532873faa65cfb6a281fa2f0fd4543b41b1e473feaebffa7c2d418d not found: ID does not exist" containerID="03ade3578532873faa65cfb6a281fa2f0fd4543b41b1e473feaebffa7c2d418d" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.194372 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ade3578532873faa65cfb6a281fa2f0fd4543b41b1e473feaebffa7c2d418d"} err="failed to get container status \"03ade3578532873faa65cfb6a281fa2f0fd4543b41b1e473feaebffa7c2d418d\": rpc error: code = NotFound desc = could not find container \"03ade3578532873faa65cfb6a281fa2f0fd4543b41b1e473feaebffa7c2d418d\": container with ID starting with 03ade3578532873faa65cfb6a281fa2f0fd4543b41b1e473feaebffa7c2d418d not found: ID does not exist" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.297973 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-catalog-content\") pod \"18a4da67-7097-498e-9ef8-14c82563963f\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.298042 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-utilities\") pod \"18a4da67-7097-498e-9ef8-14c82563963f\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 
09:24:38.298075 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6qgz\" (UniqueName: \"kubernetes.io/projected/18a4da67-7097-498e-9ef8-14c82563963f-kube-api-access-n6qgz\") pod \"18a4da67-7097-498e-9ef8-14c82563963f\" (UID: \"18a4da67-7097-498e-9ef8-14c82563963f\") " Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.300346 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-utilities" (OuterVolumeSpecName: "utilities") pod "18a4da67-7097-498e-9ef8-14c82563963f" (UID: "18a4da67-7097-498e-9ef8-14c82563963f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.304209 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a4da67-7097-498e-9ef8-14c82563963f-kube-api-access-n6qgz" (OuterVolumeSpecName: "kube-api-access-n6qgz") pod "18a4da67-7097-498e-9ef8-14c82563963f" (UID: "18a4da67-7097-498e-9ef8-14c82563963f"). InnerVolumeSpecName "kube-api-access-n6qgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.349832 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18a4da67-7097-498e-9ef8-14c82563963f" (UID: "18a4da67-7097-498e-9ef8-14c82563963f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.400197 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.400250 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18a4da67-7097-498e-9ef8-14c82563963f-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:24:38 crc kubenswrapper[4867]: I0101 09:24:38.400272 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6qgz\" (UniqueName: \"kubernetes.io/projected/18a4da67-7097-498e-9ef8-14c82563963f-kube-api-access-n6qgz\") on node \"crc\" DevicePath \"\"" Jan 01 09:24:39 crc kubenswrapper[4867]: I0101 09:24:39.135138 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bptq5" Jan 01 09:24:39 crc kubenswrapper[4867]: I0101 09:24:39.188625 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bptq5"] Jan 01 09:24:39 crc kubenswrapper[4867]: I0101 09:24:39.197470 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bptq5"] Jan 01 09:24:41 crc kubenswrapper[4867]: I0101 09:24:41.160156 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a4da67-7097-498e-9ef8-14c82563963f" path="/var/lib/kubelet/pods/18a4da67-7097-498e-9ef8-14c82563963f/volumes" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.026255 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hkzcl"] Jan 01 09:25:22 crc kubenswrapper[4867]: E0101 09:25:22.027339 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="18a4da67-7097-498e-9ef8-14c82563963f" containerName="extract-content" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.027358 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a4da67-7097-498e-9ef8-14c82563963f" containerName="extract-content" Jan 01 09:25:22 crc kubenswrapper[4867]: E0101 09:25:22.027392 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a4da67-7097-498e-9ef8-14c82563963f" containerName="extract-utilities" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.027403 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a4da67-7097-498e-9ef8-14c82563963f" containerName="extract-utilities" Jan 01 09:25:22 crc kubenswrapper[4867]: E0101 09:25:22.027421 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a4da67-7097-498e-9ef8-14c82563963f" containerName="registry-server" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.027433 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a4da67-7097-498e-9ef8-14c82563963f" containerName="registry-server" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.027668 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a4da67-7097-498e-9ef8-14c82563963f" containerName="registry-server" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.029215 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.061199 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hkzcl"] Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.114618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-utilities\") pod \"community-operators-hkzcl\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.114707 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-catalog-content\") pod \"community-operators-hkzcl\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.114755 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzc8c\" (UniqueName: \"kubernetes.io/projected/334f1b77-cd63-406c-9244-ee387aa2d09b-kube-api-access-fzc8c\") pod \"community-operators-hkzcl\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.215988 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-utilities\") pod \"community-operators-hkzcl\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.216269 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-catalog-content\") pod \"community-operators-hkzcl\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.216393 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-utilities\") pod \"community-operators-hkzcl\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.216402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzc8c\" (UniqueName: \"kubernetes.io/projected/334f1b77-cd63-406c-9244-ee387aa2d09b-kube-api-access-fzc8c\") pod \"community-operators-hkzcl\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.217492 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-catalog-content\") pod \"community-operators-hkzcl\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.236628 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzc8c\" (UniqueName: \"kubernetes.io/projected/334f1b77-cd63-406c-9244-ee387aa2d09b-kube-api-access-fzc8c\") pod \"community-operators-hkzcl\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.348683 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:22 crc kubenswrapper[4867]: I0101 09:25:22.909860 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hkzcl"] Jan 01 09:25:23 crc kubenswrapper[4867]: I0101 09:25:23.558717 4867 generic.go:334] "Generic (PLEG): container finished" podID="334f1b77-cd63-406c-9244-ee387aa2d09b" containerID="07b1e15e31d7f103fac8d214175328215ae0e991908c854d5eb6e5cfa68d21e3" exitCode=0 Jan 01 09:25:23 crc kubenswrapper[4867]: I0101 09:25:23.558773 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkzcl" event={"ID":"334f1b77-cd63-406c-9244-ee387aa2d09b","Type":"ContainerDied","Data":"07b1e15e31d7f103fac8d214175328215ae0e991908c854d5eb6e5cfa68d21e3"} Jan 01 09:25:23 crc kubenswrapper[4867]: I0101 09:25:23.558809 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkzcl" event={"ID":"334f1b77-cd63-406c-9244-ee387aa2d09b","Type":"ContainerStarted","Data":"729e493bc30c470a18dc7d7ec91e394eb74a7f0dbb784d4442f4513a8b0207d5"} Jan 01 09:25:24 crc kubenswrapper[4867]: I0101 09:25:24.569253 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkzcl" event={"ID":"334f1b77-cd63-406c-9244-ee387aa2d09b","Type":"ContainerStarted","Data":"b9a7fab03f569c414180491ed002534820f731085577243a02993cd2c9ddb731"} Jan 01 09:25:25 crc kubenswrapper[4867]: I0101 09:25:25.577561 4867 generic.go:334] "Generic (PLEG): container finished" podID="334f1b77-cd63-406c-9244-ee387aa2d09b" containerID="b9a7fab03f569c414180491ed002534820f731085577243a02993cd2c9ddb731" exitCode=0 Jan 01 09:25:25 crc kubenswrapper[4867]: I0101 09:25:25.577600 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkzcl" 
event={"ID":"334f1b77-cd63-406c-9244-ee387aa2d09b","Type":"ContainerDied","Data":"b9a7fab03f569c414180491ed002534820f731085577243a02993cd2c9ddb731"} Jan 01 09:25:26 crc kubenswrapper[4867]: I0101 09:25:26.587380 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkzcl" event={"ID":"334f1b77-cd63-406c-9244-ee387aa2d09b","Type":"ContainerStarted","Data":"78d8508a505f810f86979896d0f6ff041ba0d9576a429b85a7a671180a568fd9"} Jan 01 09:25:26 crc kubenswrapper[4867]: I0101 09:25:26.616049 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hkzcl" podStartSLOduration=2.0504443 podStartE2EDuration="4.616031361s" podCreationTimestamp="2026-01-01 09:25:22 +0000 UTC" firstStartedPulling="2026-01-01 09:25:23.561644662 +0000 UTC m=+3532.696913431" lastFinishedPulling="2026-01-01 09:25:26.127231683 +0000 UTC m=+3535.262500492" observedRunningTime="2026-01-01 09:25:26.612976944 +0000 UTC m=+3535.748245723" watchObservedRunningTime="2026-01-01 09:25:26.616031361 +0000 UTC m=+3535.751300130" Jan 01 09:25:32 crc kubenswrapper[4867]: I0101 09:25:32.349033 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:32 crc kubenswrapper[4867]: I0101 09:25:32.349627 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:32 crc kubenswrapper[4867]: I0101 09:25:32.402043 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:32 crc kubenswrapper[4867]: I0101 09:25:32.716560 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:32 crc kubenswrapper[4867]: I0101 09:25:32.785767 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-hkzcl"] Jan 01 09:25:34 crc kubenswrapper[4867]: I0101 09:25:34.658960 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hkzcl" podUID="334f1b77-cd63-406c-9244-ee387aa2d09b" containerName="registry-server" containerID="cri-o://78d8508a505f810f86979896d0f6ff041ba0d9576a429b85a7a671180a568fd9" gracePeriod=2 Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.672937 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkzcl" event={"ID":"334f1b77-cd63-406c-9244-ee387aa2d09b","Type":"ContainerDied","Data":"78d8508a505f810f86979896d0f6ff041ba0d9576a429b85a7a671180a568fd9"} Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.672794 4867 generic.go:334] "Generic (PLEG): container finished" podID="334f1b77-cd63-406c-9244-ee387aa2d09b" containerID="78d8508a505f810f86979896d0f6ff041ba0d9576a429b85a7a671180a568fd9" exitCode=0 Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.673288 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkzcl" event={"ID":"334f1b77-cd63-406c-9244-ee387aa2d09b","Type":"ContainerDied","Data":"729e493bc30c470a18dc7d7ec91e394eb74a7f0dbb784d4442f4513a8b0207d5"} Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.673490 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="729e493bc30c470a18dc7d7ec91e394eb74a7f0dbb784d4442f4513a8b0207d5" Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.705788 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.742157 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzc8c\" (UniqueName: \"kubernetes.io/projected/334f1b77-cd63-406c-9244-ee387aa2d09b-kube-api-access-fzc8c\") pod \"334f1b77-cd63-406c-9244-ee387aa2d09b\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.742244 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-catalog-content\") pod \"334f1b77-cd63-406c-9244-ee387aa2d09b\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.742275 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-utilities\") pod \"334f1b77-cd63-406c-9244-ee387aa2d09b\" (UID: \"334f1b77-cd63-406c-9244-ee387aa2d09b\") " Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.744026 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-utilities" (OuterVolumeSpecName: "utilities") pod "334f1b77-cd63-406c-9244-ee387aa2d09b" (UID: "334f1b77-cd63-406c-9244-ee387aa2d09b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.750529 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/334f1b77-cd63-406c-9244-ee387aa2d09b-kube-api-access-fzc8c" (OuterVolumeSpecName: "kube-api-access-fzc8c") pod "334f1b77-cd63-406c-9244-ee387aa2d09b" (UID: "334f1b77-cd63-406c-9244-ee387aa2d09b"). InnerVolumeSpecName "kube-api-access-fzc8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.797298 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "334f1b77-cd63-406c-9244-ee387aa2d09b" (UID: "334f1b77-cd63-406c-9244-ee387aa2d09b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.844019 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzc8c\" (UniqueName: \"kubernetes.io/projected/334f1b77-cd63-406c-9244-ee387aa2d09b-kube-api-access-fzc8c\") on node \"crc\" DevicePath \"\"" Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.844064 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:25:35 crc kubenswrapper[4867]: I0101 09:25:35.844084 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/334f1b77-cd63-406c-9244-ee387aa2d09b-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:25:36 crc kubenswrapper[4867]: I0101 09:25:36.678613 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hkzcl" Jan 01 09:25:36 crc kubenswrapper[4867]: I0101 09:25:36.711812 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hkzcl"] Jan 01 09:25:36 crc kubenswrapper[4867]: I0101 09:25:36.722970 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hkzcl"] Jan 01 09:25:37 crc kubenswrapper[4867]: I0101 09:25:37.144077 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="334f1b77-cd63-406c-9244-ee387aa2d09b" path="/var/lib/kubelet/pods/334f1b77-cd63-406c-9244-ee387aa2d09b/volumes" Jan 01 09:26:21 crc kubenswrapper[4867]: I0101 09:26:21.331711 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:26:21 crc kubenswrapper[4867]: I0101 09:26:21.332498 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:26:51 crc kubenswrapper[4867]: I0101 09:26:51.331220 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:26:51 crc kubenswrapper[4867]: I0101 09:26:51.331936 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" 
podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:27:21 crc kubenswrapper[4867]: I0101 09:27:21.331132 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:27:21 crc kubenswrapper[4867]: I0101 09:27:21.331756 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:27:21 crc kubenswrapper[4867]: I0101 09:27:21.331816 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 09:27:21 crc kubenswrapper[4867]: I0101 09:27:21.333203 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 09:27:21 crc kubenswrapper[4867]: I0101 09:27:21.333299 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" gracePeriod=600 Jan 01 
09:27:21 crc kubenswrapper[4867]: E0101 09:27:21.464557 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:27:21 crc kubenswrapper[4867]: I0101 09:27:21.637283 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" exitCode=0 Jan 01 09:27:21 crc kubenswrapper[4867]: I0101 09:27:21.637393 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815"} Jan 01 09:27:21 crc kubenswrapper[4867]: I0101 09:27:21.637474 4867 scope.go:117] "RemoveContainer" containerID="15ec261e737e02f10bb4e58227cc31403721a5336c7e9233d62e654fee1472f1" Jan 01 09:27:21 crc kubenswrapper[4867]: I0101 09:27:21.638386 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:27:21 crc kubenswrapper[4867]: E0101 09:27:21.638963 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:27:35 crc kubenswrapper[4867]: I0101 09:27:35.129339 4867 
scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:27:35 crc kubenswrapper[4867]: E0101 09:27:35.130461 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:27:46 crc kubenswrapper[4867]: I0101 09:27:46.128464 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:27:46 crc kubenswrapper[4867]: E0101 09:27:46.129637 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:27:58 crc kubenswrapper[4867]: I0101 09:27:58.129585 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:27:58 crc kubenswrapper[4867]: E0101 09:27:58.130635 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:28:10 crc kubenswrapper[4867]: I0101 
09:28:10.129108 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:28:10 crc kubenswrapper[4867]: E0101 09:28:10.130105 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:28:23 crc kubenswrapper[4867]: I0101 09:28:23.129095 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:28:23 crc kubenswrapper[4867]: E0101 09:28:23.130256 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:28:34 crc kubenswrapper[4867]: I0101 09:28:34.128782 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:28:34 crc kubenswrapper[4867]: E0101 09:28:34.129675 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:28:45 crc 
kubenswrapper[4867]: I0101 09:28:45.129760 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:28:45 crc kubenswrapper[4867]: E0101 09:28:45.130817 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:28:59 crc kubenswrapper[4867]: I0101 09:28:59.129647 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:28:59 crc kubenswrapper[4867]: E0101 09:28:59.130861 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:29:12 crc kubenswrapper[4867]: I0101 09:29:12.128715 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:29:12 crc kubenswrapper[4867]: E0101 09:29:12.130753 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 
01 09:29:23 crc kubenswrapper[4867]: I0101 09:29:23.129478 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:29:23 crc kubenswrapper[4867]: E0101 09:29:23.130577 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:29:34 crc kubenswrapper[4867]: I0101 09:29:34.128582 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:29:34 crc kubenswrapper[4867]: E0101 09:29:34.129762 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:29:46 crc kubenswrapper[4867]: I0101 09:29:46.129730 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:29:46 crc kubenswrapper[4867]: E0101 09:29:46.130404 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" 
podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.174903 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw"] Jan 01 09:30:00 crc kubenswrapper[4867]: E0101 09:30:00.175789 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="334f1b77-cd63-406c-9244-ee387aa2d09b" containerName="extract-content" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.175806 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="334f1b77-cd63-406c-9244-ee387aa2d09b" containerName="extract-content" Jan 01 09:30:00 crc kubenswrapper[4867]: E0101 09:30:00.175829 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="334f1b77-cd63-406c-9244-ee387aa2d09b" containerName="extract-utilities" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.175836 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="334f1b77-cd63-406c-9244-ee387aa2d09b" containerName="extract-utilities" Jan 01 09:30:00 crc kubenswrapper[4867]: E0101 09:30:00.175846 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="334f1b77-cd63-406c-9244-ee387aa2d09b" containerName="registry-server" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.175853 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="334f1b77-cd63-406c-9244-ee387aa2d09b" containerName="registry-server" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.176025 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="334f1b77-cd63-406c-9244-ee387aa2d09b" containerName="registry-server" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.176538 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.178368 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.178616 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.190488 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw"] Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.199610 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff11ad-16c5-4154-92d4-6b515cefc6fd-config-volume\") pod \"collect-profiles-29454330-fwnnw\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.199674 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff11ad-16c5-4154-92d4-6b515cefc6fd-secret-volume\") pod \"collect-profiles-29454330-fwnnw\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.199741 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw94q\" (UniqueName: \"kubernetes.io/projected/2bff11ad-16c5-4154-92d4-6b515cefc6fd-kube-api-access-pw94q\") pod \"collect-profiles-29454330-fwnnw\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.301202 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw94q\" (UniqueName: \"kubernetes.io/projected/2bff11ad-16c5-4154-92d4-6b515cefc6fd-kube-api-access-pw94q\") pod \"collect-profiles-29454330-fwnnw\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.301394 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff11ad-16c5-4154-92d4-6b515cefc6fd-config-volume\") pod \"collect-profiles-29454330-fwnnw\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.301454 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff11ad-16c5-4154-92d4-6b515cefc6fd-secret-volume\") pod \"collect-profiles-29454330-fwnnw\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.303371 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff11ad-16c5-4154-92d4-6b515cefc6fd-config-volume\") pod \"collect-profiles-29454330-fwnnw\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.316961 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2bff11ad-16c5-4154-92d4-6b515cefc6fd-secret-volume\") pod \"collect-profiles-29454330-fwnnw\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.325586 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw94q\" (UniqueName: \"kubernetes.io/projected/2bff11ad-16c5-4154-92d4-6b515cefc6fd-kube-api-access-pw94q\") pod \"collect-profiles-29454330-fwnnw\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:00 crc kubenswrapper[4867]: I0101 09:30:00.502502 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:01 crc kubenswrapper[4867]: I0101 09:30:01.024316 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw"] Jan 01 09:30:01 crc kubenswrapper[4867]: I0101 09:30:01.118232 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" event={"ID":"2bff11ad-16c5-4154-92d4-6b515cefc6fd","Type":"ContainerStarted","Data":"dff12d638bb6ae4797d89d084692c051bbe63aefff871e1354dabc85e59266f1"} Jan 01 09:30:01 crc kubenswrapper[4867]: I0101 09:30:01.133330 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:30:01 crc kubenswrapper[4867]: E0101 09:30:01.133567 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:30:02 crc kubenswrapper[4867]: I0101 09:30:02.124768 4867 generic.go:334] "Generic (PLEG): container finished" podID="2bff11ad-16c5-4154-92d4-6b515cefc6fd" containerID="2e1fcfcb6380a111cfd596149d2ec631e8e09233d7f51dbb72524b846caf0328" exitCode=0 Jan 01 09:30:02 crc kubenswrapper[4867]: I0101 09:30:02.124810 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" event={"ID":"2bff11ad-16c5-4154-92d4-6b515cefc6fd","Type":"ContainerDied","Data":"2e1fcfcb6380a111cfd596149d2ec631e8e09233d7f51dbb72524b846caf0328"} Jan 01 09:30:03 crc kubenswrapper[4867]: I0101 09:30:03.455044 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:03 crc kubenswrapper[4867]: I0101 09:30:03.654818 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff11ad-16c5-4154-92d4-6b515cefc6fd-config-volume\") pod \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") " Jan 01 09:30:03 crc kubenswrapper[4867]: I0101 09:30:03.654956 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff11ad-16c5-4154-92d4-6b515cefc6fd-secret-volume\") pod \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") " Jan 01 09:30:03 crc kubenswrapper[4867]: I0101 09:30:03.654998 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw94q\" (UniqueName: \"kubernetes.io/projected/2bff11ad-16c5-4154-92d4-6b515cefc6fd-kube-api-access-pw94q\") pod \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\" (UID: \"2bff11ad-16c5-4154-92d4-6b515cefc6fd\") 
" Jan 01 09:30:03 crc kubenswrapper[4867]: I0101 09:30:03.656312 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bff11ad-16c5-4154-92d4-6b515cefc6fd-config-volume" (OuterVolumeSpecName: "config-volume") pod "2bff11ad-16c5-4154-92d4-6b515cefc6fd" (UID: "2bff11ad-16c5-4154-92d4-6b515cefc6fd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:30:03 crc kubenswrapper[4867]: I0101 09:30:03.757313 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff11ad-16c5-4154-92d4-6b515cefc6fd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 01 09:30:04 crc kubenswrapper[4867]: I0101 09:30:04.147252 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" event={"ID":"2bff11ad-16c5-4154-92d4-6b515cefc6fd","Type":"ContainerDied","Data":"dff12d638bb6ae4797d89d084692c051bbe63aefff871e1354dabc85e59266f1"} Jan 01 09:30:04 crc kubenswrapper[4867]: I0101 09:30:04.147297 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dff12d638bb6ae4797d89d084692c051bbe63aefff871e1354dabc85e59266f1" Jan 01 09:30:04 crc kubenswrapper[4867]: I0101 09:30:04.147382 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454330-fwnnw" Jan 01 09:30:04 crc kubenswrapper[4867]: I0101 09:30:04.252363 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bff11ad-16c5-4154-92d4-6b515cefc6fd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2bff11ad-16c5-4154-92d4-6b515cefc6fd" (UID: "2bff11ad-16c5-4154-92d4-6b515cefc6fd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:30:04 crc kubenswrapper[4867]: I0101 09:30:04.252967 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bff11ad-16c5-4154-92d4-6b515cefc6fd-kube-api-access-pw94q" (OuterVolumeSpecName: "kube-api-access-pw94q") pod "2bff11ad-16c5-4154-92d4-6b515cefc6fd" (UID: "2bff11ad-16c5-4154-92d4-6b515cefc6fd"). InnerVolumeSpecName "kube-api-access-pw94q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:30:04 crc kubenswrapper[4867]: I0101 09:30:04.265342 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff11ad-16c5-4154-92d4-6b515cefc6fd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 01 09:30:04 crc kubenswrapper[4867]: I0101 09:30:04.265404 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw94q\" (UniqueName: \"kubernetes.io/projected/2bff11ad-16c5-4154-92d4-6b515cefc6fd-kube-api-access-pw94q\") on node \"crc\" DevicePath \"\"" Jan 01 09:30:04 crc kubenswrapper[4867]: I0101 09:30:04.552228 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct"] Jan 01 09:30:04 crc kubenswrapper[4867]: I0101 09:30:04.560937 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454285-kqkct"] Jan 01 09:30:05 crc kubenswrapper[4867]: I0101 09:30:05.146377 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46" path="/var/lib/kubelet/pods/32e3945b-5eb9-42ef-b8ce-9a3a3d0cfe46/volumes" Jan 01 09:30:15 crc kubenswrapper[4867]: I0101 09:30:15.129309 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:30:15 crc kubenswrapper[4867]: E0101 09:30:15.130235 4867 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:30:30 crc kubenswrapper[4867]: I0101 09:30:30.129170 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:30:30 crc kubenswrapper[4867]: E0101 09:30:30.130953 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:30:40 crc kubenswrapper[4867]: I0101 09:30:40.068626 4867 scope.go:117] "RemoveContainer" containerID="82a5121fba39bcb412d420a1466556657a05b4f87cb976579186b0e0909ae070" Jan 01 09:30:44 crc kubenswrapper[4867]: I0101 09:30:44.129279 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:30:44 crc kubenswrapper[4867]: E0101 09:30:44.130353 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:30:59 crc kubenswrapper[4867]: I0101 09:30:59.128556 4867 scope.go:117] "RemoveContainer" 
containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:30:59 crc kubenswrapper[4867]: E0101 09:30:59.130576 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:31:11 crc kubenswrapper[4867]: I0101 09:31:11.138037 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:31:11 crc kubenswrapper[4867]: E0101 09:31:11.139188 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:31:24 crc kubenswrapper[4867]: I0101 09:31:24.129291 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:31:24 crc kubenswrapper[4867]: E0101 09:31:24.131641 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:31:37 crc kubenswrapper[4867]: I0101 09:31:37.128962 4867 scope.go:117] 
"RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:31:37 crc kubenswrapper[4867]: E0101 09:31:37.130101 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:31:40 crc kubenswrapper[4867]: I0101 09:31:40.147239 4867 scope.go:117] "RemoveContainer" containerID="07b1e15e31d7f103fac8d214175328215ae0e991908c854d5eb6e5cfa68d21e3" Jan 01 09:31:40 crc kubenswrapper[4867]: I0101 09:31:40.269421 4867 scope.go:117] "RemoveContainer" containerID="b9a7fab03f569c414180491ed002534820f731085577243a02993cd2c9ddb731" Jan 01 09:31:40 crc kubenswrapper[4867]: I0101 09:31:40.320664 4867 scope.go:117] "RemoveContainer" containerID="78d8508a505f810f86979896d0f6ff041ba0d9576a429b85a7a671180a568fd9" Jan 01 09:31:51 crc kubenswrapper[4867]: I0101 09:31:51.135634 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:31:51 crc kubenswrapper[4867]: E0101 09:31:51.137097 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:32:06 crc kubenswrapper[4867]: I0101 09:32:06.129422 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:32:06 crc 
kubenswrapper[4867]: E0101 09:32:06.130435 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:32:20 crc kubenswrapper[4867]: I0101 09:32:20.129375 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:32:20 crc kubenswrapper[4867]: E0101 09:32:20.130428 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:32:35 crc kubenswrapper[4867]: I0101 09:32:35.130057 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:32:36 crc kubenswrapper[4867]: I0101 09:32:36.473732 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"0dc83063321e6b3a84dc0ca3e3ffa3ce1c8d7f541189e09b33c7c3311259fbcc"} Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.376689 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jl2tx"] Jan 01 09:33:03 crc kubenswrapper[4867]: E0101 09:33:03.377708 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2bff11ad-16c5-4154-92d4-6b515cefc6fd" containerName="collect-profiles" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.377729 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bff11ad-16c5-4154-92d4-6b515cefc6fd" containerName="collect-profiles" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.378023 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bff11ad-16c5-4154-92d4-6b515cefc6fd" containerName="collect-profiles" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.383414 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.392299 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jl2tx"] Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.494027 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpks4\" (UniqueName: \"kubernetes.io/projected/ecc7490d-25ce-4c82-a622-c5832b1f8f50-kube-api-access-zpks4\") pod \"redhat-operators-jl2tx\" (UID: \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.494207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-utilities\") pod \"redhat-operators-jl2tx\" (UID: \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.494290 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-catalog-content\") pod \"redhat-operators-jl2tx\" (UID: 
\"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.595444 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpks4\" (UniqueName: \"kubernetes.io/projected/ecc7490d-25ce-4c82-a622-c5832b1f8f50-kube-api-access-zpks4\") pod \"redhat-operators-jl2tx\" (UID: \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.595537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-utilities\") pod \"redhat-operators-jl2tx\" (UID: \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.595575 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-catalog-content\") pod \"redhat-operators-jl2tx\" (UID: \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.596206 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-catalog-content\") pod \"redhat-operators-jl2tx\" (UID: \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.596399 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-utilities\") pod \"redhat-operators-jl2tx\" (UID: \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " 
pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.626532 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpks4\" (UniqueName: \"kubernetes.io/projected/ecc7490d-25ce-4c82-a622-c5832b1f8f50-kube-api-access-zpks4\") pod \"redhat-operators-jl2tx\" (UID: \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:03 crc kubenswrapper[4867]: I0101 09:33:03.719300 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:04 crc kubenswrapper[4867]: I0101 09:33:04.165489 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jl2tx"] Jan 01 09:33:04 crc kubenswrapper[4867]: W0101 09:33:04.178647 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecc7490d_25ce_4c82_a622_c5832b1f8f50.slice/crio-ebdae03db7eb27ae778b786665031063f8fc35d6583b1a58abf4cf7c3d09c39a WatchSource:0}: Error finding container ebdae03db7eb27ae778b786665031063f8fc35d6583b1a58abf4cf7c3d09c39a: Status 404 returned error can't find the container with id ebdae03db7eb27ae778b786665031063f8fc35d6583b1a58abf4cf7c3d09c39a Jan 01 09:33:04 crc kubenswrapper[4867]: I0101 09:33:04.728629 4867 generic.go:334] "Generic (PLEG): container finished" podID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerID="83791136660c45afbfc77a5efa5bffec55b79da26bb9217726ce28b35f6ea5b7" exitCode=0 Jan 01 09:33:04 crc kubenswrapper[4867]: I0101 09:33:04.728690 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl2tx" event={"ID":"ecc7490d-25ce-4c82-a622-c5832b1f8f50","Type":"ContainerDied","Data":"83791136660c45afbfc77a5efa5bffec55b79da26bb9217726ce28b35f6ea5b7"} Jan 01 09:33:04 crc kubenswrapper[4867]: I0101 09:33:04.728725 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl2tx" event={"ID":"ecc7490d-25ce-4c82-a622-c5832b1f8f50","Type":"ContainerStarted","Data":"ebdae03db7eb27ae778b786665031063f8fc35d6583b1a58abf4cf7c3d09c39a"} Jan 01 09:33:04 crc kubenswrapper[4867]: I0101 09:33:04.731793 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 09:33:05 crc kubenswrapper[4867]: I0101 09:33:05.739586 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl2tx" event={"ID":"ecc7490d-25ce-4c82-a622-c5832b1f8f50","Type":"ContainerStarted","Data":"59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61"} Jan 01 09:33:06 crc kubenswrapper[4867]: I0101 09:33:06.751449 4867 generic.go:334] "Generic (PLEG): container finished" podID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerID="59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61" exitCode=0 Jan 01 09:33:06 crc kubenswrapper[4867]: I0101 09:33:06.751561 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl2tx" event={"ID":"ecc7490d-25ce-4c82-a622-c5832b1f8f50","Type":"ContainerDied","Data":"59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61"} Jan 01 09:33:07 crc kubenswrapper[4867]: I0101 09:33:07.761797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl2tx" event={"ID":"ecc7490d-25ce-4c82-a622-c5832b1f8f50","Type":"ContainerStarted","Data":"cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d"} Jan 01 09:33:07 crc kubenswrapper[4867]: I0101 09:33:07.791552 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jl2tx" podStartSLOduration=2.219784561 podStartE2EDuration="4.791525941s" podCreationTimestamp="2026-01-01 09:33:03 +0000 UTC" firstStartedPulling="2026-01-01 09:33:04.731433998 +0000 
UTC m=+3993.866702777" lastFinishedPulling="2026-01-01 09:33:07.303175388 +0000 UTC m=+3996.438444157" observedRunningTime="2026-01-01 09:33:07.789266076 +0000 UTC m=+3996.924534945" watchObservedRunningTime="2026-01-01 09:33:07.791525941 +0000 UTC m=+3996.926794740" Jan 01 09:33:13 crc kubenswrapper[4867]: I0101 09:33:13.719818 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:13 crc kubenswrapper[4867]: I0101 09:33:13.720424 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:14 crc kubenswrapper[4867]: I0101 09:33:14.988638 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jl2tx" podUID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerName="registry-server" probeResult="failure" output=< Jan 01 09:33:14 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 01 09:33:14 crc kubenswrapper[4867]: > Jan 01 09:33:23 crc kubenswrapper[4867]: I0101 09:33:23.781448 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:23 crc kubenswrapper[4867]: I0101 09:33:23.839948 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:24 crc kubenswrapper[4867]: I0101 09:33:24.971400 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jl2tx"] Jan 01 09:33:24 crc kubenswrapper[4867]: I0101 09:33:24.971646 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jl2tx" podUID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerName="registry-server" containerID="cri-o://cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d" gracePeriod=2 Jan 01 09:33:25 crc 
kubenswrapper[4867]: I0101 09:33:25.491939 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.630435 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-utilities\") pod \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\" (UID: \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.630541 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpks4\" (UniqueName: \"kubernetes.io/projected/ecc7490d-25ce-4c82-a622-c5832b1f8f50-kube-api-access-zpks4\") pod \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\" (UID: \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.630572 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-catalog-content\") pod \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\" (UID: \"ecc7490d-25ce-4c82-a622-c5832b1f8f50\") " Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.632255 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-utilities" (OuterVolumeSpecName: "utilities") pod "ecc7490d-25ce-4c82-a622-c5832b1f8f50" (UID: "ecc7490d-25ce-4c82-a622-c5832b1f8f50"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.640827 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc7490d-25ce-4c82-a622-c5832b1f8f50-kube-api-access-zpks4" (OuterVolumeSpecName: "kube-api-access-zpks4") pod "ecc7490d-25ce-4c82-a622-c5832b1f8f50" (UID: "ecc7490d-25ce-4c82-a622-c5832b1f8f50"). InnerVolumeSpecName "kube-api-access-zpks4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.732283 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.732356 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpks4\" (UniqueName: \"kubernetes.io/projected/ecc7490d-25ce-4c82-a622-c5832b1f8f50-kube-api-access-zpks4\") on node \"crc\" DevicePath \"\"" Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.819222 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecc7490d-25ce-4c82-a622-c5832b1f8f50" (UID: "ecc7490d-25ce-4c82-a622-c5832b1f8f50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.855697 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc7490d-25ce-4c82-a622-c5832b1f8f50-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.959838 4867 generic.go:334] "Generic (PLEG): container finished" podID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerID="cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d" exitCode=0 Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.959959 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl2tx" event={"ID":"ecc7490d-25ce-4c82-a622-c5832b1f8f50","Type":"ContainerDied","Data":"cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d"} Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.959995 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl2tx" event={"ID":"ecc7490d-25ce-4c82-a622-c5832b1f8f50","Type":"ContainerDied","Data":"ebdae03db7eb27ae778b786665031063f8fc35d6583b1a58abf4cf7c3d09c39a"} Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.960016 4867 scope.go:117] "RemoveContainer" containerID="cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d" Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.960205 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jl2tx" Jan 01 09:33:25 crc kubenswrapper[4867]: I0101 09:33:25.987274 4867 scope.go:117] "RemoveContainer" containerID="59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61" Jan 01 09:33:26 crc kubenswrapper[4867]: I0101 09:33:25.999642 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jl2tx"] Jan 01 09:33:26 crc kubenswrapper[4867]: I0101 09:33:26.008239 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jl2tx"] Jan 01 09:33:26 crc kubenswrapper[4867]: I0101 09:33:26.033096 4867 scope.go:117] "RemoveContainer" containerID="83791136660c45afbfc77a5efa5bffec55b79da26bb9217726ce28b35f6ea5b7" Jan 01 09:33:26 crc kubenswrapper[4867]: I0101 09:33:26.075090 4867 scope.go:117] "RemoveContainer" containerID="cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d" Jan 01 09:33:26 crc kubenswrapper[4867]: E0101 09:33:26.075570 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d\": container with ID starting with cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d not found: ID does not exist" containerID="cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d" Jan 01 09:33:26 crc kubenswrapper[4867]: I0101 09:33:26.075608 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d"} err="failed to get container status \"cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d\": rpc error: code = NotFound desc = could not find container \"cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d\": container with ID starting with cfbc86f32dfbc3407c06381254d6518b870482a02c85aad1ddb7ce9490d7e06d not found: ID does 
not exist" Jan 01 09:33:26 crc kubenswrapper[4867]: I0101 09:33:26.075633 4867 scope.go:117] "RemoveContainer" containerID="59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61" Jan 01 09:33:26 crc kubenswrapper[4867]: E0101 09:33:26.075798 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61\": container with ID starting with 59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61 not found: ID does not exist" containerID="59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61" Jan 01 09:33:26 crc kubenswrapper[4867]: I0101 09:33:26.075817 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61"} err="failed to get container status \"59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61\": rpc error: code = NotFound desc = could not find container \"59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61\": container with ID starting with 59a3edaabdc7e62fecedb1cf3dcca0f9ba66726014c4443a632a0366282a5b61 not found: ID does not exist" Jan 01 09:33:26 crc kubenswrapper[4867]: I0101 09:33:26.075830 4867 scope.go:117] "RemoveContainer" containerID="83791136660c45afbfc77a5efa5bffec55b79da26bb9217726ce28b35f6ea5b7" Jan 01 09:33:26 crc kubenswrapper[4867]: E0101 09:33:26.075996 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83791136660c45afbfc77a5efa5bffec55b79da26bb9217726ce28b35f6ea5b7\": container with ID starting with 83791136660c45afbfc77a5efa5bffec55b79da26bb9217726ce28b35f6ea5b7 not found: ID does not exist" containerID="83791136660c45afbfc77a5efa5bffec55b79da26bb9217726ce28b35f6ea5b7" Jan 01 09:33:26 crc kubenswrapper[4867]: I0101 09:33:26.076016 4867 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83791136660c45afbfc77a5efa5bffec55b79da26bb9217726ce28b35f6ea5b7"} err="failed to get container status \"83791136660c45afbfc77a5efa5bffec55b79da26bb9217726ce28b35f6ea5b7\": rpc error: code = NotFound desc = could not find container \"83791136660c45afbfc77a5efa5bffec55b79da26bb9217726ce28b35f6ea5b7\": container with ID starting with 83791136660c45afbfc77a5efa5bffec55b79da26bb9217726ce28b35f6ea5b7 not found: ID does not exist" Jan 01 09:33:27 crc kubenswrapper[4867]: I0101 09:33:27.139320 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" path="/var/lib/kubelet/pods/ecc7490d-25ce-4c82-a622-c5832b1f8f50/volumes" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.145560 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bhkh6"] Jan 01 09:34:43 crc kubenswrapper[4867]: E0101 09:34:43.146708 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerName="extract-utilities" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.146731 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerName="extract-utilities" Jan 01 09:34:43 crc kubenswrapper[4867]: E0101 09:34:43.146754 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerName="extract-content" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.146766 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerName="extract-content" Jan 01 09:34:43 crc kubenswrapper[4867]: E0101 09:34:43.146796 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerName="registry-server" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.146808 4867 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerName="registry-server" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.147101 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc7490d-25ce-4c82-a622-c5832b1f8f50" containerName="registry-server" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.149697 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.173940 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhkh6"] Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.254122 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-catalog-content\") pod \"certified-operators-bhkh6\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.254238 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-utilities\") pod \"certified-operators-bhkh6\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.254280 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwkqm\" (UniqueName: \"kubernetes.io/projected/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-kube-api-access-dwkqm\") pod \"certified-operators-bhkh6\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 
09:34:43.356196 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-catalog-content\") pod \"certified-operators-bhkh6\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.356322 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-utilities\") pod \"certified-operators-bhkh6\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.356371 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwkqm\" (UniqueName: \"kubernetes.io/projected/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-kube-api-access-dwkqm\") pod \"certified-operators-bhkh6\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.356865 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-catalog-content\") pod \"certified-operators-bhkh6\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.356939 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-utilities\") pod \"certified-operators-bhkh6\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.377787 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwkqm\" (UniqueName: \"kubernetes.io/projected/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-kube-api-access-dwkqm\") pod \"certified-operators-bhkh6\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:43 crc kubenswrapper[4867]: I0101 09:34:43.485846 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:44 crc kubenswrapper[4867]: I0101 09:34:44.002763 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhkh6"] Jan 01 09:34:44 crc kubenswrapper[4867]: I0101 09:34:44.651355 4867 generic.go:334] "Generic (PLEG): container finished" podID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" containerID="7eb9a0f0bb1d0eb87114135243ac980af72776933b20cdb61c1f78021ebcb387" exitCode=0 Jan 01 09:34:44 crc kubenswrapper[4867]: I0101 09:34:44.651444 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkh6" event={"ID":"47791f7d-6ac5-46c1-b1aa-e463d6129c7b","Type":"ContainerDied","Data":"7eb9a0f0bb1d0eb87114135243ac980af72776933b20cdb61c1f78021ebcb387"} Jan 01 09:34:44 crc kubenswrapper[4867]: I0101 09:34:44.651786 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkh6" event={"ID":"47791f7d-6ac5-46c1-b1aa-e463d6129c7b","Type":"ContainerStarted","Data":"86a3d4fd2b3991f5872fcaced3818392fa688dd5923e0805b00ada8a97554413"} Jan 01 09:34:45 crc kubenswrapper[4867]: I0101 09:34:45.668193 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkh6" event={"ID":"47791f7d-6ac5-46c1-b1aa-e463d6129c7b","Type":"ContainerStarted","Data":"a0952f9d14f72edd8785c08b6ff16ac42d8f9d0f03aa6dc1fd9f0a0dee3fa08b"} Jan 01 09:34:46 crc kubenswrapper[4867]: I0101 09:34:46.682467 4867 
generic.go:334] "Generic (PLEG): container finished" podID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" containerID="a0952f9d14f72edd8785c08b6ff16ac42d8f9d0f03aa6dc1fd9f0a0dee3fa08b" exitCode=0 Jan 01 09:34:46 crc kubenswrapper[4867]: I0101 09:34:46.682540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkh6" event={"ID":"47791f7d-6ac5-46c1-b1aa-e463d6129c7b","Type":"ContainerDied","Data":"a0952f9d14f72edd8785c08b6ff16ac42d8f9d0f03aa6dc1fd9f0a0dee3fa08b"} Jan 01 09:34:47 crc kubenswrapper[4867]: I0101 09:34:47.696879 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkh6" event={"ID":"47791f7d-6ac5-46c1-b1aa-e463d6129c7b","Type":"ContainerStarted","Data":"01fb4e0941ddabcb9982054f3044d76f3a32a7ac10ccfd11b2f364b2e7b0cc14"} Jan 01 09:34:47 crc kubenswrapper[4867]: I0101 09:34:47.730424 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bhkh6" podStartSLOduration=2.180462955 podStartE2EDuration="4.730399444s" podCreationTimestamp="2026-01-01 09:34:43 +0000 UTC" firstStartedPulling="2026-01-01 09:34:44.653937554 +0000 UTC m=+4093.789206333" lastFinishedPulling="2026-01-01 09:34:47.203874023 +0000 UTC m=+4096.339142822" observedRunningTime="2026-01-01 09:34:47.725390691 +0000 UTC m=+4096.860659500" watchObservedRunningTime="2026-01-01 09:34:47.730399444 +0000 UTC m=+4096.865668223" Jan 01 09:34:51 crc kubenswrapper[4867]: I0101 09:34:51.331807 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:34:51 crc kubenswrapper[4867]: I0101 09:34:51.332346 4867 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:34:53 crc kubenswrapper[4867]: I0101 09:34:53.486677 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:53 crc kubenswrapper[4867]: I0101 09:34:53.487251 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:53 crc kubenswrapper[4867]: I0101 09:34:53.566317 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:53 crc kubenswrapper[4867]: I0101 09:34:53.824670 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:53 crc kubenswrapper[4867]: I0101 09:34:53.890049 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhkh6"] Jan 01 09:34:55 crc kubenswrapper[4867]: I0101 09:34:55.770270 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bhkh6" podUID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" containerName="registry-server" containerID="cri-o://01fb4e0941ddabcb9982054f3044d76f3a32a7ac10ccfd11b2f364b2e7b0cc14" gracePeriod=2 Jan 01 09:34:56 crc kubenswrapper[4867]: I0101 09:34:56.791022 4867 generic.go:334] "Generic (PLEG): container finished" podID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" containerID="01fb4e0941ddabcb9982054f3044d76f3a32a7ac10ccfd11b2f364b2e7b0cc14" exitCode=0 Jan 01 09:34:56 crc kubenswrapper[4867]: I0101 09:34:56.791110 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-bhkh6" event={"ID":"47791f7d-6ac5-46c1-b1aa-e463d6129c7b","Type":"ContainerDied","Data":"01fb4e0941ddabcb9982054f3044d76f3a32a7ac10ccfd11b2f364b2e7b0cc14"} Jan 01 09:34:56 crc kubenswrapper[4867]: I0101 09:34:56.971795 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.075137 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-utilities\") pod \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.075260 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-catalog-content\") pod \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.075428 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwkqm\" (UniqueName: \"kubernetes.io/projected/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-kube-api-access-dwkqm\") pod \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\" (UID: \"47791f7d-6ac5-46c1-b1aa-e463d6129c7b\") " Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.076300 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-utilities" (OuterVolumeSpecName: "utilities") pod "47791f7d-6ac5-46c1-b1aa-e463d6129c7b" (UID: "47791f7d-6ac5-46c1-b1aa-e463d6129c7b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.084286 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-kube-api-access-dwkqm" (OuterVolumeSpecName: "kube-api-access-dwkqm") pod "47791f7d-6ac5-46c1-b1aa-e463d6129c7b" (UID: "47791f7d-6ac5-46c1-b1aa-e463d6129c7b"). InnerVolumeSpecName "kube-api-access-dwkqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.130215 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47791f7d-6ac5-46c1-b1aa-e463d6129c7b" (UID: "47791f7d-6ac5-46c1-b1aa-e463d6129c7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.176515 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwkqm\" (UniqueName: \"kubernetes.io/projected/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-kube-api-access-dwkqm\") on node \"crc\" DevicePath \"\"" Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.176770 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.176779 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47791f7d-6ac5-46c1-b1aa-e463d6129c7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.806136 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhkh6" 
event={"ID":"47791f7d-6ac5-46c1-b1aa-e463d6129c7b","Type":"ContainerDied","Data":"86a3d4fd2b3991f5872fcaced3818392fa688dd5923e0805b00ada8a97554413"} Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.806224 4867 scope.go:117] "RemoveContainer" containerID="01fb4e0941ddabcb9982054f3044d76f3a32a7ac10ccfd11b2f364b2e7b0cc14" Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.806268 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhkh6" Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.839387 4867 scope.go:117] "RemoveContainer" containerID="a0952f9d14f72edd8785c08b6ff16ac42d8f9d0f03aa6dc1fd9f0a0dee3fa08b" Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.844087 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhkh6"] Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.854098 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bhkh6"] Jan 01 09:34:57 crc kubenswrapper[4867]: I0101 09:34:57.864175 4867 scope.go:117] "RemoveContainer" containerID="7eb9a0f0bb1d0eb87114135243ac980af72776933b20cdb61c1f78021ebcb387" Jan 01 09:34:59 crc kubenswrapper[4867]: I0101 09:34:59.147913 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" path="/var/lib/kubelet/pods/47791f7d-6ac5-46c1-b1aa-e463d6129c7b/volumes" Jan 01 09:35:21 crc kubenswrapper[4867]: I0101 09:35:21.331528 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:35:21 crc kubenswrapper[4867]: I0101 09:35:21.332439 4867 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.383040 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7fdfd"] Jan 01 09:35:24 crc kubenswrapper[4867]: E0101 09:35:24.384213 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" containerName="extract-content" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.384314 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" containerName="extract-content" Jan 01 09:35:24 crc kubenswrapper[4867]: E0101 09:35:24.384414 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" containerName="extract-utilities" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.384495 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" containerName="extract-utilities" Jan 01 09:35:24 crc kubenswrapper[4867]: E0101 09:35:24.384526 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" containerName="registry-server" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.384605 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" containerName="registry-server" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.393827 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="47791f7d-6ac5-46c1-b1aa-e463d6129c7b" containerName="registry-server" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.405963 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-7fdfd"] Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.406224 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.554956 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-utilities\") pod \"redhat-marketplace-7fdfd\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.555021 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-catalog-content\") pod \"redhat-marketplace-7fdfd\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.555306 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gmp5\" (UniqueName: \"kubernetes.io/projected/f910e576-5949-44e0-8f48-97755c5ea0b1-kube-api-access-7gmp5\") pod \"redhat-marketplace-7fdfd\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.656441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-utilities\") pod \"redhat-marketplace-7fdfd\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.656506 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-catalog-content\") pod \"redhat-marketplace-7fdfd\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.656592 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gmp5\" (UniqueName: \"kubernetes.io/projected/f910e576-5949-44e0-8f48-97755c5ea0b1-kube-api-access-7gmp5\") pod \"redhat-marketplace-7fdfd\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.657123 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-catalog-content\") pod \"redhat-marketplace-7fdfd\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.657177 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-utilities\") pod \"redhat-marketplace-7fdfd\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.689139 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gmp5\" (UniqueName: \"kubernetes.io/projected/f910e576-5949-44e0-8f48-97755c5ea0b1-kube-api-access-7gmp5\") pod \"redhat-marketplace-7fdfd\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:24 crc kubenswrapper[4867]: I0101 09:35:24.746063 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:25 crc kubenswrapper[4867]: I0101 09:35:25.231568 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fdfd"] Jan 01 09:35:26 crc kubenswrapper[4867]: I0101 09:35:26.091947 4867 generic.go:334] "Generic (PLEG): container finished" podID="f910e576-5949-44e0-8f48-97755c5ea0b1" containerID="80e6655abd09d5edd18025d3abd8157e9074be772c37a888147adfef59aa8f4e" exitCode=0 Jan 01 09:35:26 crc kubenswrapper[4867]: I0101 09:35:26.092006 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fdfd" event={"ID":"f910e576-5949-44e0-8f48-97755c5ea0b1","Type":"ContainerDied","Data":"80e6655abd09d5edd18025d3abd8157e9074be772c37a888147adfef59aa8f4e"} Jan 01 09:35:26 crc kubenswrapper[4867]: I0101 09:35:26.092237 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fdfd" event={"ID":"f910e576-5949-44e0-8f48-97755c5ea0b1","Type":"ContainerStarted","Data":"2f2d9c3f5056257c89c344a7006125c725af3ef18250380ae1550f0b8e2b63ea"} Jan 01 09:35:27 crc kubenswrapper[4867]: I0101 09:35:27.105080 4867 generic.go:334] "Generic (PLEG): container finished" podID="f910e576-5949-44e0-8f48-97755c5ea0b1" containerID="fb4fe42665e2fa567f314c93e26c35c830e05159f78cbf6a2c23621c82bcede0" exitCode=0 Jan 01 09:35:27 crc kubenswrapper[4867]: I0101 09:35:27.105148 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fdfd" event={"ID":"f910e576-5949-44e0-8f48-97755c5ea0b1","Type":"ContainerDied","Data":"fb4fe42665e2fa567f314c93e26c35c830e05159f78cbf6a2c23621c82bcede0"} Jan 01 09:35:28 crc kubenswrapper[4867]: I0101 09:35:28.117464 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fdfd" 
event={"ID":"f910e576-5949-44e0-8f48-97755c5ea0b1","Type":"ContainerStarted","Data":"f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761"} Jan 01 09:35:28 crc kubenswrapper[4867]: I0101 09:35:28.148471 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7fdfd" podStartSLOduration=2.714967638 podStartE2EDuration="4.148443007s" podCreationTimestamp="2026-01-01 09:35:24 +0000 UTC" firstStartedPulling="2026-01-01 09:35:26.094091376 +0000 UTC m=+4135.229360185" lastFinishedPulling="2026-01-01 09:35:27.527566745 +0000 UTC m=+4136.662835554" observedRunningTime="2026-01-01 09:35:28.137074743 +0000 UTC m=+4137.272343582" watchObservedRunningTime="2026-01-01 09:35:28.148443007 +0000 UTC m=+4137.283711816" Jan 01 09:35:34 crc kubenswrapper[4867]: I0101 09:35:34.746349 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:34 crc kubenswrapper[4867]: I0101 09:35:34.747197 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:34 crc kubenswrapper[4867]: I0101 09:35:34.820871 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:35 crc kubenswrapper[4867]: I0101 09:35:35.253864 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:35 crc kubenswrapper[4867]: I0101 09:35:35.305256 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fdfd"] Jan 01 09:35:37 crc kubenswrapper[4867]: I0101 09:35:37.198730 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7fdfd" podUID="f910e576-5949-44e0-8f48-97755c5ea0b1" containerName="registry-server" 
containerID="cri-o://f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761" gracePeriod=2 Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.140329 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.210685 4867 generic.go:334] "Generic (PLEG): container finished" podID="f910e576-5949-44e0-8f48-97755c5ea0b1" containerID="f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761" exitCode=0 Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.210732 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fdfd" event={"ID":"f910e576-5949-44e0-8f48-97755c5ea0b1","Type":"ContainerDied","Data":"f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761"} Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.210778 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fdfd" event={"ID":"f910e576-5949-44e0-8f48-97755c5ea0b1","Type":"ContainerDied","Data":"2f2d9c3f5056257c89c344a7006125c725af3ef18250380ae1550f0b8e2b63ea"} Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.210801 4867 scope.go:117] "RemoveContainer" containerID="f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.210831 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fdfd" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.238813 4867 scope.go:117] "RemoveContainer" containerID="fb4fe42665e2fa567f314c93e26c35c830e05159f78cbf6a2c23621c82bcede0" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.271291 4867 scope.go:117] "RemoveContainer" containerID="80e6655abd09d5edd18025d3abd8157e9074be772c37a888147adfef59aa8f4e" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.298080 4867 scope.go:117] "RemoveContainer" containerID="f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761" Jan 01 09:35:38 crc kubenswrapper[4867]: E0101 09:35:38.298599 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761\": container with ID starting with f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761 not found: ID does not exist" containerID="f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.298631 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761"} err="failed to get container status \"f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761\": rpc error: code = NotFound desc = could not find container \"f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761\": container with ID starting with f9c3515c3993551678a64b2879eea017f633a3c64987bb5b2d54db2aa197b761 not found: ID does not exist" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.298652 4867 scope.go:117] "RemoveContainer" containerID="fb4fe42665e2fa567f314c93e26c35c830e05159f78cbf6a2c23621c82bcede0" Jan 01 09:35:38 crc kubenswrapper[4867]: E0101 09:35:38.299128 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"fb4fe42665e2fa567f314c93e26c35c830e05159f78cbf6a2c23621c82bcede0\": container with ID starting with fb4fe42665e2fa567f314c93e26c35c830e05159f78cbf6a2c23621c82bcede0 not found: ID does not exist" containerID="fb4fe42665e2fa567f314c93e26c35c830e05159f78cbf6a2c23621c82bcede0" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.299188 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb4fe42665e2fa567f314c93e26c35c830e05159f78cbf6a2c23621c82bcede0"} err="failed to get container status \"fb4fe42665e2fa567f314c93e26c35c830e05159f78cbf6a2c23621c82bcede0\": rpc error: code = NotFound desc = could not find container \"fb4fe42665e2fa567f314c93e26c35c830e05159f78cbf6a2c23621c82bcede0\": container with ID starting with fb4fe42665e2fa567f314c93e26c35c830e05159f78cbf6a2c23621c82bcede0 not found: ID does not exist" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.299269 4867 scope.go:117] "RemoveContainer" containerID="80e6655abd09d5edd18025d3abd8157e9074be772c37a888147adfef59aa8f4e" Jan 01 09:35:38 crc kubenswrapper[4867]: E0101 09:35:38.299653 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e6655abd09d5edd18025d3abd8157e9074be772c37a888147adfef59aa8f4e\": container with ID starting with 80e6655abd09d5edd18025d3abd8157e9074be772c37a888147adfef59aa8f4e not found: ID does not exist" containerID="80e6655abd09d5edd18025d3abd8157e9074be772c37a888147adfef59aa8f4e" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.299682 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e6655abd09d5edd18025d3abd8157e9074be772c37a888147adfef59aa8f4e"} err="failed to get container status \"80e6655abd09d5edd18025d3abd8157e9074be772c37a888147adfef59aa8f4e\": rpc error: code = NotFound desc = could not find container 
\"80e6655abd09d5edd18025d3abd8157e9074be772c37a888147adfef59aa8f4e\": container with ID starting with 80e6655abd09d5edd18025d3abd8157e9074be772c37a888147adfef59aa8f4e not found: ID does not exist" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.330804 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-utilities\") pod \"f910e576-5949-44e0-8f48-97755c5ea0b1\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.330907 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-catalog-content\") pod \"f910e576-5949-44e0-8f48-97755c5ea0b1\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.330939 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gmp5\" (UniqueName: \"kubernetes.io/projected/f910e576-5949-44e0-8f48-97755c5ea0b1-kube-api-access-7gmp5\") pod \"f910e576-5949-44e0-8f48-97755c5ea0b1\" (UID: \"f910e576-5949-44e0-8f48-97755c5ea0b1\") " Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.336572 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-utilities" (OuterVolumeSpecName: "utilities") pod "f910e576-5949-44e0-8f48-97755c5ea0b1" (UID: "f910e576-5949-44e0-8f48-97755c5ea0b1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.339080 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f910e576-5949-44e0-8f48-97755c5ea0b1-kube-api-access-7gmp5" (OuterVolumeSpecName: "kube-api-access-7gmp5") pod "f910e576-5949-44e0-8f48-97755c5ea0b1" (UID: "f910e576-5949-44e0-8f48-97755c5ea0b1"). InnerVolumeSpecName "kube-api-access-7gmp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.362611 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f910e576-5949-44e0-8f48-97755c5ea0b1" (UID: "f910e576-5949-44e0-8f48-97755c5ea0b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.432300 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.432344 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f910e576-5949-44e0-8f48-97755c5ea0b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.432363 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gmp5\" (UniqueName: \"kubernetes.io/projected/f910e576-5949-44e0-8f48-97755c5ea0b1-kube-api-access-7gmp5\") on node \"crc\" DevicePath \"\"" Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 09:35:38.555706 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fdfd"] Jan 01 09:35:38 crc kubenswrapper[4867]: I0101 
09:35:38.563162 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fdfd"] Jan 01 09:35:39 crc kubenswrapper[4867]: I0101 09:35:39.142093 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f910e576-5949-44e0-8f48-97755c5ea0b1" path="/var/lib/kubelet/pods/f910e576-5949-44e0-8f48-97755c5ea0b1/volumes" Jan 01 09:35:51 crc kubenswrapper[4867]: I0101 09:35:51.330973 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:35:51 crc kubenswrapper[4867]: I0101 09:35:51.331728 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:35:51 crc kubenswrapper[4867]: I0101 09:35:51.331774 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 09:35:51 crc kubenswrapper[4867]: I0101 09:35:51.332501 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0dc83063321e6b3a84dc0ca3e3ffa3ce1c8d7f541189e09b33c7c3311259fbcc"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 09:35:51 crc kubenswrapper[4867]: I0101 09:35:51.332568 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" 
podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://0dc83063321e6b3a84dc0ca3e3ffa3ce1c8d7f541189e09b33c7c3311259fbcc" gracePeriod=600 Jan 01 09:35:52 crc kubenswrapper[4867]: I0101 09:35:52.340068 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="0dc83063321e6b3a84dc0ca3e3ffa3ce1c8d7f541189e09b33c7c3311259fbcc" exitCode=0 Jan 01 09:35:52 crc kubenswrapper[4867]: I0101 09:35:52.340219 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"0dc83063321e6b3a84dc0ca3e3ffa3ce1c8d7f541189e09b33c7c3311259fbcc"} Jan 01 09:35:52 crc kubenswrapper[4867]: I0101 09:35:52.340655 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1"} Jan 01 09:35:52 crc kubenswrapper[4867]: I0101 09:35:52.340702 4867 scope.go:117] "RemoveContainer" containerID="fe8217f3f465aea496163b54d945ea6122fe0cdc382c79e94a49c603f1007815" Jan 01 09:37:51 crc kubenswrapper[4867]: I0101 09:37:51.331458 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:37:51 crc kubenswrapper[4867]: I0101 09:37:51.332277 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 01 09:38:21 crc kubenswrapper[4867]: I0101 09:38:21.331337 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:38:21 crc kubenswrapper[4867]: I0101 09:38:21.331877 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.508844 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-thj57"] Jan 01 09:38:28 crc kubenswrapper[4867]: E0101 09:38:28.509794 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f910e576-5949-44e0-8f48-97755c5ea0b1" containerName="registry-server" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.509818 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f910e576-5949-44e0-8f48-97755c5ea0b1" containerName="registry-server" Jan 01 09:38:28 crc kubenswrapper[4867]: E0101 09:38:28.509838 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f910e576-5949-44e0-8f48-97755c5ea0b1" containerName="extract-utilities" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.509846 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f910e576-5949-44e0-8f48-97755c5ea0b1" containerName="extract-utilities" Jan 01 09:38:28 crc kubenswrapper[4867]: E0101 09:38:28.509863 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f910e576-5949-44e0-8f48-97755c5ea0b1" containerName="extract-content" Jan 01 09:38:28 crc 
kubenswrapper[4867]: I0101 09:38:28.509874 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f910e576-5949-44e0-8f48-97755c5ea0b1" containerName="extract-content" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.510076 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f910e576-5949-44e0-8f48-97755c5ea0b1" containerName="registry-server" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.512831 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.519538 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-thj57"] Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.666953 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-catalog-content\") pod \"community-operators-thj57\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.666999 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fw6j\" (UniqueName: \"kubernetes.io/projected/0982718e-19ea-4818-85c5-054447e6bf66-kube-api-access-8fw6j\") pod \"community-operators-thj57\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.667101 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-utilities\") pod \"community-operators-thj57\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " pod="openshift-marketplace/community-operators-thj57" Jan 01 
09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.768929 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-catalog-content\") pod \"community-operators-thj57\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.768975 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fw6j\" (UniqueName: \"kubernetes.io/projected/0982718e-19ea-4818-85c5-054447e6bf66-kube-api-access-8fw6j\") pod \"community-operators-thj57\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.769012 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-utilities\") pod \"community-operators-thj57\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.769574 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-catalog-content\") pod \"community-operators-thj57\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.769630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-utilities\") pod \"community-operators-thj57\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 
09:38:28.789669 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fw6j\" (UniqueName: \"kubernetes.io/projected/0982718e-19ea-4818-85c5-054447e6bf66-kube-api-access-8fw6j\") pod \"community-operators-thj57\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:28 crc kubenswrapper[4867]: I0101 09:38:28.838343 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:29 crc kubenswrapper[4867]: I0101 09:38:29.336040 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-thj57"] Jan 01 09:38:29 crc kubenswrapper[4867]: I0101 09:38:29.771868 4867 generic.go:334] "Generic (PLEG): container finished" podID="0982718e-19ea-4818-85c5-054447e6bf66" containerID="e8ef9b97d6935b19d4467e517e9ebe6c61b17f5acb0dd9eeb42b8400e3177fb7" exitCode=0 Jan 01 09:38:29 crc kubenswrapper[4867]: I0101 09:38:29.771980 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thj57" event={"ID":"0982718e-19ea-4818-85c5-054447e6bf66","Type":"ContainerDied","Data":"e8ef9b97d6935b19d4467e517e9ebe6c61b17f5acb0dd9eeb42b8400e3177fb7"} Jan 01 09:38:29 crc kubenswrapper[4867]: I0101 09:38:29.772021 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thj57" event={"ID":"0982718e-19ea-4818-85c5-054447e6bf66","Type":"ContainerStarted","Data":"f9422646e7cd6055f96754fe3e054f42dd34a66b5184a5e02a6900f3fab2973a"} Jan 01 09:38:29 crc kubenswrapper[4867]: I0101 09:38:29.776654 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 09:38:30 crc kubenswrapper[4867]: I0101 09:38:30.783550 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thj57" 
event={"ID":"0982718e-19ea-4818-85c5-054447e6bf66","Type":"ContainerStarted","Data":"677f3ef7dbeeb0c2a10750cf70324b1d56def658a681efb0cf351e25886dbfef"} Jan 01 09:38:31 crc kubenswrapper[4867]: I0101 09:38:31.794907 4867 generic.go:334] "Generic (PLEG): container finished" podID="0982718e-19ea-4818-85c5-054447e6bf66" containerID="677f3ef7dbeeb0c2a10750cf70324b1d56def658a681efb0cf351e25886dbfef" exitCode=0 Jan 01 09:38:31 crc kubenswrapper[4867]: I0101 09:38:31.795042 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thj57" event={"ID":"0982718e-19ea-4818-85c5-054447e6bf66","Type":"ContainerDied","Data":"677f3ef7dbeeb0c2a10750cf70324b1d56def658a681efb0cf351e25886dbfef"} Jan 01 09:38:32 crc kubenswrapper[4867]: I0101 09:38:32.815028 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thj57" event={"ID":"0982718e-19ea-4818-85c5-054447e6bf66","Type":"ContainerStarted","Data":"6217b2db24d7d7fbec8e8decf7cc21f0b55dcece90251f843b7d886e7aa05085"} Jan 01 09:38:32 crc kubenswrapper[4867]: I0101 09:38:32.840136 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-thj57" podStartSLOduration=2.4398860510000002 podStartE2EDuration="4.84010089s" podCreationTimestamp="2026-01-01 09:38:28 +0000 UTC" firstStartedPulling="2026-01-01 09:38:29.776260881 +0000 UTC m=+4318.911529680" lastFinishedPulling="2026-01-01 09:38:32.17647572 +0000 UTC m=+4321.311744519" observedRunningTime="2026-01-01 09:38:32.837920958 +0000 UTC m=+4321.973189757" watchObservedRunningTime="2026-01-01 09:38:32.84010089 +0000 UTC m=+4321.975369689" Jan 01 09:38:38 crc kubenswrapper[4867]: I0101 09:38:38.839195 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:38 crc kubenswrapper[4867]: I0101 09:38:38.839807 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:38 crc kubenswrapper[4867]: I0101 09:38:38.889508 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:38 crc kubenswrapper[4867]: I0101 09:38:38.951508 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:39 crc kubenswrapper[4867]: I0101 09:38:39.146220 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-thj57"] Jan 01 09:38:40 crc kubenswrapper[4867]: I0101 09:38:40.876283 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-thj57" podUID="0982718e-19ea-4818-85c5-054447e6bf66" containerName="registry-server" containerID="cri-o://6217b2db24d7d7fbec8e8decf7cc21f0b55dcece90251f843b7d886e7aa05085" gracePeriod=2 Jan 01 09:38:41 crc kubenswrapper[4867]: I0101 09:38:41.883972 4867 generic.go:334] "Generic (PLEG): container finished" podID="0982718e-19ea-4818-85c5-054447e6bf66" containerID="6217b2db24d7d7fbec8e8decf7cc21f0b55dcece90251f843b7d886e7aa05085" exitCode=0 Jan 01 09:38:41 crc kubenswrapper[4867]: I0101 09:38:41.884023 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thj57" event={"ID":"0982718e-19ea-4818-85c5-054447e6bf66","Type":"ContainerDied","Data":"6217b2db24d7d7fbec8e8decf7cc21f0b55dcece90251f843b7d886e7aa05085"} Jan 01 09:38:41 crc kubenswrapper[4867]: I0101 09:38:41.953419 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.072068 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-utilities\") pod \"0982718e-19ea-4818-85c5-054447e6bf66\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.072143 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fw6j\" (UniqueName: \"kubernetes.io/projected/0982718e-19ea-4818-85c5-054447e6bf66-kube-api-access-8fw6j\") pod \"0982718e-19ea-4818-85c5-054447e6bf66\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.072234 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-catalog-content\") pod \"0982718e-19ea-4818-85c5-054447e6bf66\" (UID: \"0982718e-19ea-4818-85c5-054447e6bf66\") " Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.073588 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-utilities" (OuterVolumeSpecName: "utilities") pod "0982718e-19ea-4818-85c5-054447e6bf66" (UID: "0982718e-19ea-4818-85c5-054447e6bf66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.125776 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0982718e-19ea-4818-85c5-054447e6bf66" (UID: "0982718e-19ea-4818-85c5-054447e6bf66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.174141 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.174180 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0982718e-19ea-4818-85c5-054447e6bf66-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.255365 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0982718e-19ea-4818-85c5-054447e6bf66-kube-api-access-8fw6j" (OuterVolumeSpecName: "kube-api-access-8fw6j") pod "0982718e-19ea-4818-85c5-054447e6bf66" (UID: "0982718e-19ea-4818-85c5-054447e6bf66"). InnerVolumeSpecName "kube-api-access-8fw6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.276327 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fw6j\" (UniqueName: \"kubernetes.io/projected/0982718e-19ea-4818-85c5-054447e6bf66-kube-api-access-8fw6j\") on node \"crc\" DevicePath \"\"" Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.917262 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thj57" event={"ID":"0982718e-19ea-4818-85c5-054447e6bf66","Type":"ContainerDied","Data":"f9422646e7cd6055f96754fe3e054f42dd34a66b5184a5e02a6900f3fab2973a"} Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.917365 4867 scope.go:117] "RemoveContainer" containerID="6217b2db24d7d7fbec8e8decf7cc21f0b55dcece90251f843b7d886e7aa05085" Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.917444 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-thj57" Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.942349 4867 scope.go:117] "RemoveContainer" containerID="677f3ef7dbeeb0c2a10750cf70324b1d56def658a681efb0cf351e25886dbfef" Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.961843 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-thj57"] Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.971384 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-thj57"] Jan 01 09:38:42 crc kubenswrapper[4867]: I0101 09:38:42.987309 4867 scope.go:117] "RemoveContainer" containerID="e8ef9b97d6935b19d4467e517e9ebe6c61b17f5acb0dd9eeb42b8400e3177fb7" Jan 01 09:38:43 crc kubenswrapper[4867]: I0101 09:38:43.141790 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0982718e-19ea-4818-85c5-054447e6bf66" path="/var/lib/kubelet/pods/0982718e-19ea-4818-85c5-054447e6bf66/volumes" Jan 01 09:38:51 crc kubenswrapper[4867]: I0101 09:38:51.331750 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:38:51 crc kubenswrapper[4867]: I0101 09:38:51.332679 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:38:51 crc kubenswrapper[4867]: I0101 09:38:51.332751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 
09:38:51 crc kubenswrapper[4867]: I0101 09:38:51.334104 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 09:38:51 crc kubenswrapper[4867]: I0101 09:38:51.334209 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" gracePeriod=600 Jan 01 09:38:51 crc kubenswrapper[4867]: E0101 09:38:51.476682 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:38:52 crc kubenswrapper[4867]: I0101 09:38:52.025186 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" exitCode=0 Jan 01 09:38:52 crc kubenswrapper[4867]: I0101 09:38:52.025268 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1"} Jan 01 09:38:52 crc kubenswrapper[4867]: I0101 09:38:52.025368 4867 scope.go:117] 
"RemoveContainer" containerID="0dc83063321e6b3a84dc0ca3e3ffa3ce1c8d7f541189e09b33c7c3311259fbcc" Jan 01 09:38:52 crc kubenswrapper[4867]: I0101 09:38:52.026122 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:38:52 crc kubenswrapper[4867]: E0101 09:38:52.026565 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:39:03 crc kubenswrapper[4867]: I0101 09:39:03.129661 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:39:03 crc kubenswrapper[4867]: E0101 09:39:03.130828 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:39:18 crc kubenswrapper[4867]: I0101 09:39:18.128362 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:39:18 crc kubenswrapper[4867]: E0101 09:39:18.129539 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:39:32 crc kubenswrapper[4867]: I0101 09:39:32.128767 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:39:32 crc kubenswrapper[4867]: E0101 09:39:32.131076 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:39:43 crc kubenswrapper[4867]: I0101 09:39:43.128841 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:39:43 crc kubenswrapper[4867]: E0101 09:39:43.129981 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:39:54 crc kubenswrapper[4867]: I0101 09:39:54.129712 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:39:54 crc kubenswrapper[4867]: E0101 09:39:54.130660 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:40:08 crc kubenswrapper[4867]: I0101 09:40:08.129254 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:40:08 crc kubenswrapper[4867]: E0101 09:40:08.132309 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:40:23 crc kubenswrapper[4867]: I0101 09:40:23.128859 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:40:23 crc kubenswrapper[4867]: E0101 09:40:23.129659 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:40:34 crc kubenswrapper[4867]: I0101 09:40:34.129694 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:40:34 crc kubenswrapper[4867]: E0101 09:40:34.130942 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:40:49 crc kubenswrapper[4867]: I0101 09:40:49.129460 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:40:49 crc kubenswrapper[4867]: E0101 09:40:49.130521 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:41:02 crc kubenswrapper[4867]: I0101 09:41:02.128813 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:41:02 crc kubenswrapper[4867]: E0101 09:41:02.131457 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:41:14 crc kubenswrapper[4867]: I0101 09:41:14.128751 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:41:14 crc kubenswrapper[4867]: E0101 09:41:14.129728 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:41:25 crc kubenswrapper[4867]: I0101 09:41:25.129042 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:41:25 crc kubenswrapper[4867]: E0101 09:41:25.130445 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:41:37 crc kubenswrapper[4867]: I0101 09:41:37.129365 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:41:37 crc kubenswrapper[4867]: E0101 09:41:37.130372 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:41:48 crc kubenswrapper[4867]: I0101 09:41:48.128584 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:41:48 crc kubenswrapper[4867]: E0101 09:41:48.129538 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:41:59 crc kubenswrapper[4867]: I0101 09:41:59.128973 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:41:59 crc kubenswrapper[4867]: E0101 09:41:59.129571 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:42:10 crc kubenswrapper[4867]: I0101 09:42:10.129026 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:42:10 crc kubenswrapper[4867]: E0101 09:42:10.130159 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:42:21 crc kubenswrapper[4867]: I0101 09:42:21.138200 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:42:21 crc kubenswrapper[4867]: E0101 09:42:21.139927 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:42:32 crc kubenswrapper[4867]: I0101 09:42:32.128728 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:42:32 crc kubenswrapper[4867]: E0101 09:42:32.129989 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:42:47 crc kubenswrapper[4867]: I0101 09:42:47.129059 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:42:47 crc kubenswrapper[4867]: E0101 09:42:47.130426 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:42:58 crc kubenswrapper[4867]: I0101 09:42:58.129181 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:42:58 crc kubenswrapper[4867]: E0101 09:42:58.130188 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:43:13 crc kubenswrapper[4867]: I0101 09:43:13.130176 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:43:13 crc kubenswrapper[4867]: E0101 09:43:13.131323 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:43:25 crc kubenswrapper[4867]: I0101 09:43:25.129485 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:43:25 crc kubenswrapper[4867]: E0101 09:43:25.130923 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:43:36 crc kubenswrapper[4867]: I0101 09:43:36.129772 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:43:36 crc kubenswrapper[4867]: E0101 09:43:36.130766 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:43:49 crc kubenswrapper[4867]: I0101 09:43:49.128867 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:43:49 crc kubenswrapper[4867]: E0101 09:43:49.129607 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:44:04 crc kubenswrapper[4867]: I0101 09:44:04.129545 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:44:04 crc kubenswrapper[4867]: I0101 09:44:04.949113 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"7c3a0bf8d3dcccff6661f6d6d4e4e646f3451d0401617d8e0355386059505e8b"} Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.234104 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wjv5x"] Jan 01 09:44:23 crc kubenswrapper[4867]: E0101 09:44:23.235537 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0982718e-19ea-4818-85c5-054447e6bf66" containerName="registry-server" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.235571 4867 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0982718e-19ea-4818-85c5-054447e6bf66" containerName="registry-server" Jan 01 09:44:23 crc kubenswrapper[4867]: E0101 09:44:23.235610 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0982718e-19ea-4818-85c5-054447e6bf66" containerName="extract-utilities" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.235628 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0982718e-19ea-4818-85c5-054447e6bf66" containerName="extract-utilities" Jan 01 09:44:23 crc kubenswrapper[4867]: E0101 09:44:23.235670 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0982718e-19ea-4818-85c5-054447e6bf66" containerName="extract-content" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.235686 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0982718e-19ea-4818-85c5-054447e6bf66" containerName="extract-content" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.236040 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0982718e-19ea-4818-85c5-054447e6bf66" containerName="registry-server" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.238000 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.246659 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjv5x"] Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.318064 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-catalog-content\") pod \"redhat-operators-wjv5x\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.318189 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmhsc\" (UniqueName: \"kubernetes.io/projected/340707ef-fc11-4fda-b96a-b1aac158d220-kube-api-access-gmhsc\") pod \"redhat-operators-wjv5x\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.318306 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-utilities\") pod \"redhat-operators-wjv5x\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.420280 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-utilities\") pod \"redhat-operators-wjv5x\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.420473 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-catalog-content\") pod \"redhat-operators-wjv5x\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.420569 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmhsc\" (UniqueName: \"kubernetes.io/projected/340707ef-fc11-4fda-b96a-b1aac158d220-kube-api-access-gmhsc\") pod \"redhat-operators-wjv5x\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.421230 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-utilities\") pod \"redhat-operators-wjv5x\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.421459 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-catalog-content\") pod \"redhat-operators-wjv5x\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.448630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmhsc\" (UniqueName: \"kubernetes.io/projected/340707ef-fc11-4fda-b96a-b1aac158d220-kube-api-access-gmhsc\") pod \"redhat-operators-wjv5x\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:23 crc kubenswrapper[4867]: I0101 09:44:23.578274 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:24 crc kubenswrapper[4867]: I0101 09:44:24.010783 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjv5x"] Jan 01 09:44:24 crc kubenswrapper[4867]: W0101 09:44:24.020297 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod340707ef_fc11_4fda_b96a_b1aac158d220.slice/crio-bdf47cd12a464479fd56ab11da7c26a01f43cc7b608a6a47f5c9ac51039e388b WatchSource:0}: Error finding container bdf47cd12a464479fd56ab11da7c26a01f43cc7b608a6a47f5c9ac51039e388b: Status 404 returned error can't find the container with id bdf47cd12a464479fd56ab11da7c26a01f43cc7b608a6a47f5c9ac51039e388b Jan 01 09:44:24 crc kubenswrapper[4867]: I0101 09:44:24.115246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjv5x" event={"ID":"340707ef-fc11-4fda-b96a-b1aac158d220","Type":"ContainerStarted","Data":"bdf47cd12a464479fd56ab11da7c26a01f43cc7b608a6a47f5c9ac51039e388b"} Jan 01 09:44:25 crc kubenswrapper[4867]: I0101 09:44:25.126483 4867 generic.go:334] "Generic (PLEG): container finished" podID="340707ef-fc11-4fda-b96a-b1aac158d220" containerID="55aeab25fe498cc3b56b04ef2cf1cfa6e29d3e84b2e6b8b6eff66c0da754b36e" exitCode=0 Jan 01 09:44:25 crc kubenswrapper[4867]: I0101 09:44:25.126601 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjv5x" event={"ID":"340707ef-fc11-4fda-b96a-b1aac158d220","Type":"ContainerDied","Data":"55aeab25fe498cc3b56b04ef2cf1cfa6e29d3e84b2e6b8b6eff66c0da754b36e"} Jan 01 09:44:25 crc kubenswrapper[4867]: I0101 09:44:25.129494 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 09:44:26 crc kubenswrapper[4867]: I0101 09:44:26.139293 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wjv5x" event={"ID":"340707ef-fc11-4fda-b96a-b1aac158d220","Type":"ContainerStarted","Data":"d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c"} Jan 01 09:44:27 crc kubenswrapper[4867]: I0101 09:44:27.152509 4867 generic.go:334] "Generic (PLEG): container finished" podID="340707ef-fc11-4fda-b96a-b1aac158d220" containerID="d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c" exitCode=0 Jan 01 09:44:27 crc kubenswrapper[4867]: I0101 09:44:27.152573 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjv5x" event={"ID":"340707ef-fc11-4fda-b96a-b1aac158d220","Type":"ContainerDied","Data":"d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c"} Jan 01 09:44:28 crc kubenswrapper[4867]: I0101 09:44:28.168790 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjv5x" event={"ID":"340707ef-fc11-4fda-b96a-b1aac158d220","Type":"ContainerStarted","Data":"b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175"} Jan 01 09:44:28 crc kubenswrapper[4867]: I0101 09:44:28.198968 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wjv5x" podStartSLOduration=2.673459961 podStartE2EDuration="5.19887702s" podCreationTimestamp="2026-01-01 09:44:23 +0000 UTC" firstStartedPulling="2026-01-01 09:44:25.128972328 +0000 UTC m=+4674.264241137" lastFinishedPulling="2026-01-01 09:44:27.654389397 +0000 UTC m=+4676.789658196" observedRunningTime="2026-01-01 09:44:28.193803096 +0000 UTC m=+4677.329071915" watchObservedRunningTime="2026-01-01 09:44:28.19887702 +0000 UTC m=+4677.334145799" Jan 01 09:44:33 crc kubenswrapper[4867]: I0101 09:44:33.579121 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:33 crc kubenswrapper[4867]: I0101 09:44:33.579789 4867 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:34 crc kubenswrapper[4867]: I0101 09:44:34.643463 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wjv5x" podUID="340707ef-fc11-4fda-b96a-b1aac158d220" containerName="registry-server" probeResult="failure" output=< Jan 01 09:44:34 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 01 09:44:34 crc kubenswrapper[4867]: > Jan 01 09:44:43 crc kubenswrapper[4867]: I0101 09:44:43.660294 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:43 crc kubenswrapper[4867]: I0101 09:44:43.728130 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:43 crc kubenswrapper[4867]: I0101 09:44:43.906465 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wjv5x"] Jan 01 09:44:45 crc kubenswrapper[4867]: I0101 09:44:45.319769 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wjv5x" podUID="340707ef-fc11-4fda-b96a-b1aac158d220" containerName="registry-server" containerID="cri-o://b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175" gracePeriod=2 Jan 01 09:44:45 crc kubenswrapper[4867]: I0101 09:44:45.841592 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:45 crc kubenswrapper[4867]: I0101 09:44:45.889552 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-utilities\") pod \"340707ef-fc11-4fda-b96a-b1aac158d220\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " Jan 01 09:44:45 crc kubenswrapper[4867]: I0101 09:44:45.889807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-catalog-content\") pod \"340707ef-fc11-4fda-b96a-b1aac158d220\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " Jan 01 09:44:45 crc kubenswrapper[4867]: I0101 09:44:45.889883 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmhsc\" (UniqueName: \"kubernetes.io/projected/340707ef-fc11-4fda-b96a-b1aac158d220-kube-api-access-gmhsc\") pod \"340707ef-fc11-4fda-b96a-b1aac158d220\" (UID: \"340707ef-fc11-4fda-b96a-b1aac158d220\") " Jan 01 09:44:45 crc kubenswrapper[4867]: I0101 09:44:45.891735 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-utilities" (OuterVolumeSpecName: "utilities") pod "340707ef-fc11-4fda-b96a-b1aac158d220" (UID: "340707ef-fc11-4fda-b96a-b1aac158d220"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:44:45 crc kubenswrapper[4867]: I0101 09:44:45.898680 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/340707ef-fc11-4fda-b96a-b1aac158d220-kube-api-access-gmhsc" (OuterVolumeSpecName: "kube-api-access-gmhsc") pod "340707ef-fc11-4fda-b96a-b1aac158d220" (UID: "340707ef-fc11-4fda-b96a-b1aac158d220"). InnerVolumeSpecName "kube-api-access-gmhsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:44:45 crc kubenswrapper[4867]: I0101 09:44:45.991935 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmhsc\" (UniqueName: \"kubernetes.io/projected/340707ef-fc11-4fda-b96a-b1aac158d220-kube-api-access-gmhsc\") on node \"crc\" DevicePath \"\"" Jan 01 09:44:45 crc kubenswrapper[4867]: I0101 09:44:45.991970 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.056777 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "340707ef-fc11-4fda-b96a-b1aac158d220" (UID: "340707ef-fc11-4fda-b96a-b1aac158d220"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.092828 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340707ef-fc11-4fda-b96a-b1aac158d220-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.333395 4867 generic.go:334] "Generic (PLEG): container finished" podID="340707ef-fc11-4fda-b96a-b1aac158d220" containerID="b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175" exitCode=0 Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.333470 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjv5x" event={"ID":"340707ef-fc11-4fda-b96a-b1aac158d220","Type":"ContainerDied","Data":"b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175"} Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.333539 4867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-wjv5x" event={"ID":"340707ef-fc11-4fda-b96a-b1aac158d220","Type":"ContainerDied","Data":"bdf47cd12a464479fd56ab11da7c26a01f43cc7b608a6a47f5c9ac51039e388b"} Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.333558 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjv5x" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.333585 4867 scope.go:117] "RemoveContainer" containerID="b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.365006 4867 scope.go:117] "RemoveContainer" containerID="d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.397412 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wjv5x"] Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.408344 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wjv5x"] Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.422556 4867 scope.go:117] "RemoveContainer" containerID="55aeab25fe498cc3b56b04ef2cf1cfa6e29d3e84b2e6b8b6eff66c0da754b36e" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.450207 4867 scope.go:117] "RemoveContainer" containerID="b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175" Jan 01 09:44:46 crc kubenswrapper[4867]: E0101 09:44:46.450786 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175\": container with ID starting with b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175 not found: ID does not exist" containerID="b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.450841 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175"} err="failed to get container status \"b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175\": rpc error: code = NotFound desc = could not find container \"b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175\": container with ID starting with b61ee5653e0db0ec8d175e74c95581f063bb1fbe6ae9ff1db793d4d498cfe175 not found: ID does not exist" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.450874 4867 scope.go:117] "RemoveContainer" containerID="d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c" Jan 01 09:44:46 crc kubenswrapper[4867]: E0101 09:44:46.451346 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c\": container with ID starting with d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c not found: ID does not exist" containerID="d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.451458 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c"} err="failed to get container status \"d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c\": rpc error: code = NotFound desc = could not find container \"d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c\": container with ID starting with d55c2035785d864faaa5ef5a63efd60bfe7abec1e213b48dcd4db93fdadfe34c not found: ID does not exist" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.451500 4867 scope.go:117] "RemoveContainer" containerID="55aeab25fe498cc3b56b04ef2cf1cfa6e29d3e84b2e6b8b6eff66c0da754b36e" Jan 01 09:44:46 crc kubenswrapper[4867]: E0101 
09:44:46.452030 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55aeab25fe498cc3b56b04ef2cf1cfa6e29d3e84b2e6b8b6eff66c0da754b36e\": container with ID starting with 55aeab25fe498cc3b56b04ef2cf1cfa6e29d3e84b2e6b8b6eff66c0da754b36e not found: ID does not exist" containerID="55aeab25fe498cc3b56b04ef2cf1cfa6e29d3e84b2e6b8b6eff66c0da754b36e" Jan 01 09:44:46 crc kubenswrapper[4867]: I0101 09:44:46.452069 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55aeab25fe498cc3b56b04ef2cf1cfa6e29d3e84b2e6b8b6eff66c0da754b36e"} err="failed to get container status \"55aeab25fe498cc3b56b04ef2cf1cfa6e29d3e84b2e6b8b6eff66c0da754b36e\": rpc error: code = NotFound desc = could not find container \"55aeab25fe498cc3b56b04ef2cf1cfa6e29d3e84b2e6b8b6eff66c0da754b36e\": container with ID starting with 55aeab25fe498cc3b56b04ef2cf1cfa6e29d3e84b2e6b8b6eff66c0da754b36e not found: ID does not exist" Jan 01 09:44:47 crc kubenswrapper[4867]: I0101 09:44:47.146382 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="340707ef-fc11-4fda-b96a-b1aac158d220" path="/var/lib/kubelet/pods/340707ef-fc11-4fda-b96a-b1aac158d220/volumes" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.199724 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r"] Jan 01 09:45:00 crc kubenswrapper[4867]: E0101 09:45:00.200762 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340707ef-fc11-4fda-b96a-b1aac158d220" containerName="registry-server" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.200781 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="340707ef-fc11-4fda-b96a-b1aac158d220" containerName="registry-server" Jan 01 09:45:00 crc kubenswrapper[4867]: E0101 09:45:00.200822 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="340707ef-fc11-4fda-b96a-b1aac158d220" containerName="extract-utilities" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.200834 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="340707ef-fc11-4fda-b96a-b1aac158d220" containerName="extract-utilities" Jan 01 09:45:00 crc kubenswrapper[4867]: E0101 09:45:00.200863 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340707ef-fc11-4fda-b96a-b1aac158d220" containerName="extract-content" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.200873 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="340707ef-fc11-4fda-b96a-b1aac158d220" containerName="extract-content" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.201111 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="340707ef-fc11-4fda-b96a-b1aac158d220" containerName="registry-server" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.201849 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.203875 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.204745 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.209225 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r"] Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.318931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv4x7\" (UniqueName: \"kubernetes.io/projected/34024b9f-635b-4ef5-b8a3-4744d9271771-kube-api-access-sv4x7\") pod 
\"collect-profiles-29454345-g2v9r\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.319264 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34024b9f-635b-4ef5-b8a3-4744d9271771-config-volume\") pod \"collect-profiles-29454345-g2v9r\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.319348 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34024b9f-635b-4ef5-b8a3-4744d9271771-secret-volume\") pod \"collect-profiles-29454345-g2v9r\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.420512 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34024b9f-635b-4ef5-b8a3-4744d9271771-config-volume\") pod \"collect-profiles-29454345-g2v9r\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.420564 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34024b9f-635b-4ef5-b8a3-4744d9271771-secret-volume\") pod \"collect-profiles-29454345-g2v9r\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.420638 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-sv4x7\" (UniqueName: \"kubernetes.io/projected/34024b9f-635b-4ef5-b8a3-4744d9271771-kube-api-access-sv4x7\") pod \"collect-profiles-29454345-g2v9r\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.423657 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34024b9f-635b-4ef5-b8a3-4744d9271771-config-volume\") pod \"collect-profiles-29454345-g2v9r\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.435391 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34024b9f-635b-4ef5-b8a3-4744d9271771-secret-volume\") pod \"collect-profiles-29454345-g2v9r\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.438542 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv4x7\" (UniqueName: \"kubernetes.io/projected/34024b9f-635b-4ef5-b8a3-4744d9271771-kube-api-access-sv4x7\") pod \"collect-profiles-29454345-g2v9r\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.524440 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:00 crc kubenswrapper[4867]: I0101 09:45:00.999404 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r"] Jan 01 09:45:01 crc kubenswrapper[4867]: I0101 09:45:01.499319 4867 generic.go:334] "Generic (PLEG): container finished" podID="34024b9f-635b-4ef5-b8a3-4744d9271771" containerID="2507317eb0ea7695db6993e99a59b72a13a71641a00153a9b84d39e480faf3a2" exitCode=0 Jan 01 09:45:01 crc kubenswrapper[4867]: I0101 09:45:01.499372 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" event={"ID":"34024b9f-635b-4ef5-b8a3-4744d9271771","Type":"ContainerDied","Data":"2507317eb0ea7695db6993e99a59b72a13a71641a00153a9b84d39e480faf3a2"} Jan 01 09:45:01 crc kubenswrapper[4867]: I0101 09:45:01.499402 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" event={"ID":"34024b9f-635b-4ef5-b8a3-4744d9271771","Type":"ContainerStarted","Data":"f7366f7287b95be3bca5837e4f1db73ff117605f66434a4f8f12462a9637604e"} Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.153015 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.264120 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv4x7\" (UniqueName: \"kubernetes.io/projected/34024b9f-635b-4ef5-b8a3-4744d9271771-kube-api-access-sv4x7\") pod \"34024b9f-635b-4ef5-b8a3-4744d9271771\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.264186 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34024b9f-635b-4ef5-b8a3-4744d9271771-config-volume\") pod \"34024b9f-635b-4ef5-b8a3-4744d9271771\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.264284 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34024b9f-635b-4ef5-b8a3-4744d9271771-secret-volume\") pod \"34024b9f-635b-4ef5-b8a3-4744d9271771\" (UID: \"34024b9f-635b-4ef5-b8a3-4744d9271771\") " Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.265118 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34024b9f-635b-4ef5-b8a3-4744d9271771-config-volume" (OuterVolumeSpecName: "config-volume") pod "34024b9f-635b-4ef5-b8a3-4744d9271771" (UID: "34024b9f-635b-4ef5-b8a3-4744d9271771"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.269643 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34024b9f-635b-4ef5-b8a3-4744d9271771-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34024b9f-635b-4ef5-b8a3-4744d9271771" (UID: "34024b9f-635b-4ef5-b8a3-4744d9271771"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.269897 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34024b9f-635b-4ef5-b8a3-4744d9271771-kube-api-access-sv4x7" (OuterVolumeSpecName: "kube-api-access-sv4x7") pod "34024b9f-635b-4ef5-b8a3-4744d9271771" (UID: "34024b9f-635b-4ef5-b8a3-4744d9271771"). InnerVolumeSpecName "kube-api-access-sv4x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.365947 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34024b9f-635b-4ef5-b8a3-4744d9271771-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.365979 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv4x7\" (UniqueName: \"kubernetes.io/projected/34024b9f-635b-4ef5-b8a3-4744d9271771-kube-api-access-sv4x7\") on node \"crc\" DevicePath \"\"" Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.365988 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34024b9f-635b-4ef5-b8a3-4744d9271771-config-volume\") on node \"crc\" DevicePath \"\"" Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.517542 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" event={"ID":"34024b9f-635b-4ef5-b8a3-4744d9271771","Type":"ContainerDied","Data":"f7366f7287b95be3bca5837e4f1db73ff117605f66434a4f8f12462a9637604e"} Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.517588 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454345-g2v9r" Jan 01 09:45:03 crc kubenswrapper[4867]: I0101 09:45:03.517598 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7366f7287b95be3bca5837e4f1db73ff117605f66434a4f8f12462a9637604e" Jan 01 09:45:04 crc kubenswrapper[4867]: I0101 09:45:04.261136 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth"] Jan 01 09:45:04 crc kubenswrapper[4867]: I0101 09:45:04.268739 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454300-8klth"] Jan 01 09:45:05 crc kubenswrapper[4867]: I0101 09:45:05.147762 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0f53bc-2a77-4cc1-a6a5-adb9597eab05" path="/var/lib/kubelet/pods/4c0f53bc-2a77-4cc1-a6a5-adb9597eab05/volumes" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.524292 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m4rx9"] Jan 01 09:45:30 crc kubenswrapper[4867]: E0101 09:45:30.525699 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34024b9f-635b-4ef5-b8a3-4744d9271771" containerName="collect-profiles" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.525735 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34024b9f-635b-4ef5-b8a3-4744d9271771" containerName="collect-profiles" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.526128 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34024b9f-635b-4ef5-b8a3-4744d9271771" containerName="collect-profiles" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.528831 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.535913 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-catalog-content\") pod \"redhat-marketplace-m4rx9\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.536106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjnxf\" (UniqueName: \"kubernetes.io/projected/7734904d-d323-49a5-9a4f-1bf471300d1b-kube-api-access-hjnxf\") pod \"redhat-marketplace-m4rx9\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.536224 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-utilities\") pod \"redhat-marketplace-m4rx9\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.539280 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4rx9"] Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.637107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjnxf\" (UniqueName: \"kubernetes.io/projected/7734904d-d323-49a5-9a4f-1bf471300d1b-kube-api-access-hjnxf\") pod \"redhat-marketplace-m4rx9\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.637186 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-utilities\") pod \"redhat-marketplace-m4rx9\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.637243 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-catalog-content\") pod \"redhat-marketplace-m4rx9\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.637836 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-catalog-content\") pod \"redhat-marketplace-m4rx9\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.638138 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-utilities\") pod \"redhat-marketplace-m4rx9\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.671140 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjnxf\" (UniqueName: \"kubernetes.io/projected/7734904d-d323-49a5-9a4f-1bf471300d1b-kube-api-access-hjnxf\") pod \"redhat-marketplace-m4rx9\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:30 crc kubenswrapper[4867]: I0101 09:45:30.860496 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:31 crc kubenswrapper[4867]: I0101 09:45:31.101341 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4rx9"] Jan 01 09:45:31 crc kubenswrapper[4867]: I0101 09:45:31.795660 4867 generic.go:334] "Generic (PLEG): container finished" podID="7734904d-d323-49a5-9a4f-1bf471300d1b" containerID="ad7fe275189b2e0a6eac7fb79fe972bea63f8eb9f7046a040860977b10a1bd81" exitCode=0 Jan 01 09:45:31 crc kubenswrapper[4867]: I0101 09:45:31.795863 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4rx9" event={"ID":"7734904d-d323-49a5-9a4f-1bf471300d1b","Type":"ContainerDied","Data":"ad7fe275189b2e0a6eac7fb79fe972bea63f8eb9f7046a040860977b10a1bd81"} Jan 01 09:45:31 crc kubenswrapper[4867]: I0101 09:45:31.796087 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4rx9" event={"ID":"7734904d-d323-49a5-9a4f-1bf471300d1b","Type":"ContainerStarted","Data":"40f7f59be89d414d7bf5e157bae58e7ec8bef9c55be4cd778fbac3f66b94662e"} Jan 01 09:45:32 crc kubenswrapper[4867]: I0101 09:45:32.803774 4867 generic.go:334] "Generic (PLEG): container finished" podID="7734904d-d323-49a5-9a4f-1bf471300d1b" containerID="4a0b43b93c7eb96b9d93d08dc396fd5e0fd278a52e7ca834d1b155f38b15fcee" exitCode=0 Jan 01 09:45:32 crc kubenswrapper[4867]: I0101 09:45:32.803869 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4rx9" event={"ID":"7734904d-d323-49a5-9a4f-1bf471300d1b","Type":"ContainerDied","Data":"4a0b43b93c7eb96b9d93d08dc396fd5e0fd278a52e7ca834d1b155f38b15fcee"} Jan 01 09:45:33 crc kubenswrapper[4867]: I0101 09:45:33.818139 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4rx9" 
event={"ID":"7734904d-d323-49a5-9a4f-1bf471300d1b","Type":"ContainerStarted","Data":"d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2"} Jan 01 09:45:33 crc kubenswrapper[4867]: I0101 09:45:33.856044 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m4rx9" podStartSLOduration=2.452481257 podStartE2EDuration="3.856016302s" podCreationTimestamp="2026-01-01 09:45:30 +0000 UTC" firstStartedPulling="2026-01-01 09:45:31.799303286 +0000 UTC m=+4740.934572095" lastFinishedPulling="2026-01-01 09:45:33.202838371 +0000 UTC m=+4742.338107140" observedRunningTime="2026-01-01 09:45:33.848858118 +0000 UTC m=+4742.984126947" watchObservedRunningTime="2026-01-01 09:45:33.856016302 +0000 UTC m=+4742.991285111" Jan 01 09:45:40 crc kubenswrapper[4867]: I0101 09:45:40.699679 4867 scope.go:117] "RemoveContainer" containerID="9db6757a9fbebbca1b077381f9abe995099da64e2913983df7598039ed8ba7e3" Jan 01 09:45:40 crc kubenswrapper[4867]: I0101 09:45:40.860970 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:40 crc kubenswrapper[4867]: I0101 09:45:40.861014 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:40 crc kubenswrapper[4867]: I0101 09:45:40.894632 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:40 crc kubenswrapper[4867]: I0101 09:45:40.933128 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:41 crc kubenswrapper[4867]: I0101 09:45:41.146508 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4rx9"] Jan 01 09:45:42 crc kubenswrapper[4867]: I0101 09:45:42.892386 4867 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m4rx9" podUID="7734904d-d323-49a5-9a4f-1bf471300d1b" containerName="registry-server" containerID="cri-o://d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2" gracePeriod=2 Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.865763 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.921809 4867 generic.go:334] "Generic (PLEG): container finished" podID="7734904d-d323-49a5-9a4f-1bf471300d1b" containerID="d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2" exitCode=0 Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.921846 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4rx9" Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.921864 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4rx9" event={"ID":"7734904d-d323-49a5-9a4f-1bf471300d1b","Type":"ContainerDied","Data":"d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2"} Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.922468 4867 scope.go:117] "RemoveContainer" containerID="d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2" Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.922952 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4rx9" event={"ID":"7734904d-d323-49a5-9a4f-1bf471300d1b","Type":"ContainerDied","Data":"40f7f59be89d414d7bf5e157bae58e7ec8bef9c55be4cd778fbac3f66b94662e"} Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.941464 4867 scope.go:117] "RemoveContainer" containerID="4a0b43b93c7eb96b9d93d08dc396fd5e0fd278a52e7ca834d1b155f38b15fcee" Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.963632 4867 
scope.go:117] "RemoveContainer" containerID="ad7fe275189b2e0a6eac7fb79fe972bea63f8eb9f7046a040860977b10a1bd81" Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.978325 4867 scope.go:117] "RemoveContainer" containerID="d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2" Jan 01 09:45:43 crc kubenswrapper[4867]: E0101 09:45:43.978719 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2\": container with ID starting with d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2 not found: ID does not exist" containerID="d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2" Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.978749 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2"} err="failed to get container status \"d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2\": rpc error: code = NotFound desc = could not find container \"d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2\": container with ID starting with d42145686a11492d59c24e09bba5f9df3bbd09971a6a53f4302835e2f838a9c2 not found: ID does not exist" Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.978769 4867 scope.go:117] "RemoveContainer" containerID="4a0b43b93c7eb96b9d93d08dc396fd5e0fd278a52e7ca834d1b155f38b15fcee" Jan 01 09:45:43 crc kubenswrapper[4867]: E0101 09:45:43.979281 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a0b43b93c7eb96b9d93d08dc396fd5e0fd278a52e7ca834d1b155f38b15fcee\": container with ID starting with 4a0b43b93c7eb96b9d93d08dc396fd5e0fd278a52e7ca834d1b155f38b15fcee not found: ID does not exist" containerID="4a0b43b93c7eb96b9d93d08dc396fd5e0fd278a52e7ca834d1b155f38b15fcee" Jan 01 
09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.979330 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a0b43b93c7eb96b9d93d08dc396fd5e0fd278a52e7ca834d1b155f38b15fcee"} err="failed to get container status \"4a0b43b93c7eb96b9d93d08dc396fd5e0fd278a52e7ca834d1b155f38b15fcee\": rpc error: code = NotFound desc = could not find container \"4a0b43b93c7eb96b9d93d08dc396fd5e0fd278a52e7ca834d1b155f38b15fcee\": container with ID starting with 4a0b43b93c7eb96b9d93d08dc396fd5e0fd278a52e7ca834d1b155f38b15fcee not found: ID does not exist" Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.979392 4867 scope.go:117] "RemoveContainer" containerID="ad7fe275189b2e0a6eac7fb79fe972bea63f8eb9f7046a040860977b10a1bd81" Jan 01 09:45:43 crc kubenswrapper[4867]: E0101 09:45:43.979695 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad7fe275189b2e0a6eac7fb79fe972bea63f8eb9f7046a040860977b10a1bd81\": container with ID starting with ad7fe275189b2e0a6eac7fb79fe972bea63f8eb9f7046a040860977b10a1bd81 not found: ID does not exist" containerID="ad7fe275189b2e0a6eac7fb79fe972bea63f8eb9f7046a040860977b10a1bd81" Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.979718 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad7fe275189b2e0a6eac7fb79fe972bea63f8eb9f7046a040860977b10a1bd81"} err="failed to get container status \"ad7fe275189b2e0a6eac7fb79fe972bea63f8eb9f7046a040860977b10a1bd81\": rpc error: code = NotFound desc = could not find container \"ad7fe275189b2e0a6eac7fb79fe972bea63f8eb9f7046a040860977b10a1bd81\": container with ID starting with ad7fe275189b2e0a6eac7fb79fe972bea63f8eb9f7046a040860977b10a1bd81 not found: ID does not exist" Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.990651 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-catalog-content\") pod \"7734904d-d323-49a5-9a4f-1bf471300d1b\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.990718 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-utilities\") pod \"7734904d-d323-49a5-9a4f-1bf471300d1b\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.990748 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjnxf\" (UniqueName: \"kubernetes.io/projected/7734904d-d323-49a5-9a4f-1bf471300d1b-kube-api-access-hjnxf\") pod \"7734904d-d323-49a5-9a4f-1bf471300d1b\" (UID: \"7734904d-d323-49a5-9a4f-1bf471300d1b\") " Jan 01 09:45:43 crc kubenswrapper[4867]: I0101 09:45:43.991569 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-utilities" (OuterVolumeSpecName: "utilities") pod "7734904d-d323-49a5-9a4f-1bf471300d1b" (UID: "7734904d-d323-49a5-9a4f-1bf471300d1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:45:44 crc kubenswrapper[4867]: I0101 09:45:43.999900 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7734904d-d323-49a5-9a4f-1bf471300d1b-kube-api-access-hjnxf" (OuterVolumeSpecName: "kube-api-access-hjnxf") pod "7734904d-d323-49a5-9a4f-1bf471300d1b" (UID: "7734904d-d323-49a5-9a4f-1bf471300d1b"). InnerVolumeSpecName "kube-api-access-hjnxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:45:44 crc kubenswrapper[4867]: I0101 09:45:44.016671 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7734904d-d323-49a5-9a4f-1bf471300d1b" (UID: "7734904d-d323-49a5-9a4f-1bf471300d1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:45:44 crc kubenswrapper[4867]: I0101 09:45:44.092193 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:45:44 crc kubenswrapper[4867]: I0101 09:45:44.092233 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjnxf\" (UniqueName: \"kubernetes.io/projected/7734904d-d323-49a5-9a4f-1bf471300d1b-kube-api-access-hjnxf\") on node \"crc\" DevicePath \"\"" Jan 01 09:45:44 crc kubenswrapper[4867]: I0101 09:45:44.092246 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7734904d-d323-49a5-9a4f-1bf471300d1b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:45:44 crc kubenswrapper[4867]: I0101 09:45:44.262459 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4rx9"] Jan 01 09:45:44 crc kubenswrapper[4867]: I0101 09:45:44.270494 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4rx9"] Jan 01 09:45:45 crc kubenswrapper[4867]: I0101 09:45:45.146478 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7734904d-d323-49a5-9a4f-1bf471300d1b" path="/var/lib/kubelet/pods/7734904d-d323-49a5-9a4f-1bf471300d1b/volumes" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.032177 4867 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["crc-storage/crc-storage-crc-xmktq"] Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.040155 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-xmktq"] Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.146683 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f18347-b9b6-4c1c-ab58-6d987317b853" path="/var/lib/kubelet/pods/83f18347-b9b6-4c1c-ab58-6d987317b853/volumes" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.211226 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-tf4jv"] Jan 01 09:46:21 crc kubenswrapper[4867]: E0101 09:46:21.211653 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7734904d-d323-49a5-9a4f-1bf471300d1b" containerName="extract-utilities" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.211673 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7734904d-d323-49a5-9a4f-1bf471300d1b" containerName="extract-utilities" Jan 01 09:46:21 crc kubenswrapper[4867]: E0101 09:46:21.211702 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7734904d-d323-49a5-9a4f-1bf471300d1b" containerName="registry-server" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.211711 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7734904d-d323-49a5-9a4f-1bf471300d1b" containerName="registry-server" Jan 01 09:46:21 crc kubenswrapper[4867]: E0101 09:46:21.211723 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7734904d-d323-49a5-9a4f-1bf471300d1b" containerName="extract-content" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.211732 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7734904d-d323-49a5-9a4f-1bf471300d1b" containerName="extract-content" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.211943 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7734904d-d323-49a5-9a4f-1bf471300d1b" 
containerName="registry-server" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.212512 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.216749 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.216797 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.217168 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.217563 4867 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-6tq7b" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.220858 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tf4jv"] Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.331509 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.331594 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.356554 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: 
\"kubernetes.io/host-path/897075ed-f807-4082-9598-91a47ddff6a4-node-mnt\") pod \"crc-storage-crc-tf4jv\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.356643 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/897075ed-f807-4082-9598-91a47ddff6a4-crc-storage\") pod \"crc-storage-crc-tf4jv\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.357018 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27pm\" (UniqueName: \"kubernetes.io/projected/897075ed-f807-4082-9598-91a47ddff6a4-kube-api-access-b27pm\") pod \"crc-storage-crc-tf4jv\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.459092 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b27pm\" (UniqueName: \"kubernetes.io/projected/897075ed-f807-4082-9598-91a47ddff6a4-kube-api-access-b27pm\") pod \"crc-storage-crc-tf4jv\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.459273 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/897075ed-f807-4082-9598-91a47ddff6a4-node-mnt\") pod \"crc-storage-crc-tf4jv\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.459352 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/897075ed-f807-4082-9598-91a47ddff6a4-crc-storage\") pod \"crc-storage-crc-tf4jv\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.459828 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/897075ed-f807-4082-9598-91a47ddff6a4-node-mnt\") pod \"crc-storage-crc-tf4jv\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.460795 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/897075ed-f807-4082-9598-91a47ddff6a4-crc-storage\") pod \"crc-storage-crc-tf4jv\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.493323 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b27pm\" (UniqueName: \"kubernetes.io/projected/897075ed-f807-4082-9598-91a47ddff6a4-kube-api-access-b27pm\") pod \"crc-storage-crc-tf4jv\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:21 crc kubenswrapper[4867]: I0101 09:46:21.543668 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:22 crc kubenswrapper[4867]: I0101 09:46:22.070220 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tf4jv"] Jan 01 09:46:22 crc kubenswrapper[4867]: I0101 09:46:22.263216 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tf4jv" event={"ID":"897075ed-f807-4082-9598-91a47ddff6a4","Type":"ContainerStarted","Data":"1c9c0d15537ccbd2c41d3a1d5462d3d22e9e518022749c562960b2d9614ac574"} Jan 01 09:46:23 crc kubenswrapper[4867]: I0101 09:46:23.274782 4867 generic.go:334] "Generic (PLEG): container finished" podID="897075ed-f807-4082-9598-91a47ddff6a4" containerID="24afb404a91637b9d73eadb239f1b53dd5c9073622ed14aa097f4a0ba7aa3093" exitCode=0 Jan 01 09:46:23 crc kubenswrapper[4867]: I0101 09:46:23.274877 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tf4jv" event={"ID":"897075ed-f807-4082-9598-91a47ddff6a4","Type":"ContainerDied","Data":"24afb404a91637b9d73eadb239f1b53dd5c9073622ed14aa097f4a0ba7aa3093"} Jan 01 09:46:24 crc kubenswrapper[4867]: I0101 09:46:24.663225 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:24 crc kubenswrapper[4867]: I0101 09:46:24.821966 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/897075ed-f807-4082-9598-91a47ddff6a4-crc-storage\") pod \"897075ed-f807-4082-9598-91a47ddff6a4\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " Jan 01 09:46:24 crc kubenswrapper[4867]: I0101 09:46:24.822119 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/897075ed-f807-4082-9598-91a47ddff6a4-node-mnt\") pod \"897075ed-f807-4082-9598-91a47ddff6a4\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " Jan 01 09:46:24 crc kubenswrapper[4867]: I0101 09:46:24.822169 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b27pm\" (UniqueName: \"kubernetes.io/projected/897075ed-f807-4082-9598-91a47ddff6a4-kube-api-access-b27pm\") pod \"897075ed-f807-4082-9598-91a47ddff6a4\" (UID: \"897075ed-f807-4082-9598-91a47ddff6a4\") " Jan 01 09:46:24 crc kubenswrapper[4867]: I0101 09:46:24.822221 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/897075ed-f807-4082-9598-91a47ddff6a4-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "897075ed-f807-4082-9598-91a47ddff6a4" (UID: "897075ed-f807-4082-9598-91a47ddff6a4"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 09:46:24 crc kubenswrapper[4867]: I0101 09:46:24.822600 4867 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/897075ed-f807-4082-9598-91a47ddff6a4-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 01 09:46:24 crc kubenswrapper[4867]: I0101 09:46:24.828581 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/897075ed-f807-4082-9598-91a47ddff6a4-kube-api-access-b27pm" (OuterVolumeSpecName: "kube-api-access-b27pm") pod "897075ed-f807-4082-9598-91a47ddff6a4" (UID: "897075ed-f807-4082-9598-91a47ddff6a4"). InnerVolumeSpecName "kube-api-access-b27pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:46:24 crc kubenswrapper[4867]: I0101 09:46:24.838256 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/897075ed-f807-4082-9598-91a47ddff6a4-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "897075ed-f807-4082-9598-91a47ddff6a4" (UID: "897075ed-f807-4082-9598-91a47ddff6a4"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:46:24 crc kubenswrapper[4867]: I0101 09:46:24.924764 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b27pm\" (UniqueName: \"kubernetes.io/projected/897075ed-f807-4082-9598-91a47ddff6a4-kube-api-access-b27pm\") on node \"crc\" DevicePath \"\"" Jan 01 09:46:24 crc kubenswrapper[4867]: I0101 09:46:24.924827 4867 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/897075ed-f807-4082-9598-91a47ddff6a4-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 01 09:46:25 crc kubenswrapper[4867]: I0101 09:46:25.294397 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tf4jv" event={"ID":"897075ed-f807-4082-9598-91a47ddff6a4","Type":"ContainerDied","Data":"1c9c0d15537ccbd2c41d3a1d5462d3d22e9e518022749c562960b2d9614ac574"} Jan 01 09:46:25 crc kubenswrapper[4867]: I0101 09:46:25.294438 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c9c0d15537ccbd2c41d3a1d5462d3d22e9e518022749c562960b2d9614ac574" Jan 01 09:46:25 crc kubenswrapper[4867]: I0101 09:46:25.294471 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-tf4jv" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.059484 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-tf4jv"] Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.070057 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-tf4jv"] Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.142411 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="897075ed-f807-4082-9598-91a47ddff6a4" path="/var/lib/kubelet/pods/897075ed-f807-4082-9598-91a47ddff6a4/volumes" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.163120 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-csvwj"] Jan 01 09:46:27 crc kubenswrapper[4867]: E0101 09:46:27.163582 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897075ed-f807-4082-9598-91a47ddff6a4" containerName="storage" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.163612 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="897075ed-f807-4082-9598-91a47ddff6a4" containerName="storage" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.163823 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="897075ed-f807-4082-9598-91a47ddff6a4" containerName="storage" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.164572 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.168114 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.168439 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.168689 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.168976 4867 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-6tq7b" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.174543 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-csvwj"] Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.260463 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-node-mnt\") pod \"crc-storage-crc-csvwj\" (UID: \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.260726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk7fc\" (UniqueName: \"kubernetes.io/projected/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-kube-api-access-dk7fc\") pod \"crc-storage-crc-csvwj\" (UID: \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.260806 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-crc-storage\") pod \"crc-storage-crc-csvwj\" (UID: 
\"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.362567 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk7fc\" (UniqueName: \"kubernetes.io/projected/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-kube-api-access-dk7fc\") pod \"crc-storage-crc-csvwj\" (UID: \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.362650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-crc-storage\") pod \"crc-storage-crc-csvwj\" (UID: \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.362761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-node-mnt\") pod \"crc-storage-crc-csvwj\" (UID: \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.363118 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-node-mnt\") pod \"crc-storage-crc-csvwj\" (UID: \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.364643 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-crc-storage\") pod \"crc-storage-crc-csvwj\" (UID: \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.395319 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk7fc\" (UniqueName: \"kubernetes.io/projected/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-kube-api-access-dk7fc\") pod \"crc-storage-crc-csvwj\" (UID: \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.488460 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:27 crc kubenswrapper[4867]: I0101 09:46:27.702154 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-csvwj"] Jan 01 09:46:28 crc kubenswrapper[4867]: I0101 09:46:28.321753 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-csvwj" event={"ID":"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66","Type":"ContainerStarted","Data":"a13ecc62982b0eade2163bbe0d4a4807258313a24a925ec04bb8f7131937f1fc"} Jan 01 09:46:29 crc kubenswrapper[4867]: I0101 09:46:29.334179 4867 generic.go:334] "Generic (PLEG): container finished" podID="a79b522d-eaa8-4b3a-bf47-357e0fd4ca66" containerID="bb9dc939e8730f7f4108e86157b5d7fd57dc5c900ab7aace96009310d8f264c4" exitCode=0 Jan 01 09:46:29 crc kubenswrapper[4867]: I0101 09:46:29.334226 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-csvwj" event={"ID":"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66","Type":"ContainerDied","Data":"bb9dc939e8730f7f4108e86157b5d7fd57dc5c900ab7aace96009310d8f264c4"} Jan 01 09:46:30 crc kubenswrapper[4867]: I0101 09:46:30.742207 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:30 crc kubenswrapper[4867]: I0101 09:46:30.922240 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk7fc\" (UniqueName: \"kubernetes.io/projected/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-kube-api-access-dk7fc\") pod \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\" (UID: \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " Jan 01 09:46:30 crc kubenswrapper[4867]: I0101 09:46:30.922343 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-crc-storage\") pod \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\" (UID: \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " Jan 01 09:46:30 crc kubenswrapper[4867]: I0101 09:46:30.922530 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-node-mnt\") pod \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\" (UID: \"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66\") " Jan 01 09:46:30 crc kubenswrapper[4867]: I0101 09:46:30.922688 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "a79b522d-eaa8-4b3a-bf47-357e0fd4ca66" (UID: "a79b522d-eaa8-4b3a-bf47-357e0fd4ca66"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 09:46:30 crc kubenswrapper[4867]: I0101 09:46:30.923014 4867 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 01 09:46:30 crc kubenswrapper[4867]: I0101 09:46:30.941814 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-kube-api-access-dk7fc" (OuterVolumeSpecName: "kube-api-access-dk7fc") pod "a79b522d-eaa8-4b3a-bf47-357e0fd4ca66" (UID: "a79b522d-eaa8-4b3a-bf47-357e0fd4ca66"). InnerVolumeSpecName "kube-api-access-dk7fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:46:30 crc kubenswrapper[4867]: I0101 09:46:30.953239 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "a79b522d-eaa8-4b3a-bf47-357e0fd4ca66" (UID: "a79b522d-eaa8-4b3a-bf47-357e0fd4ca66"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:46:31 crc kubenswrapper[4867]: I0101 09:46:31.024630 4867 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 01 09:46:31 crc kubenswrapper[4867]: I0101 09:46:31.024679 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk7fc\" (UniqueName: \"kubernetes.io/projected/a79b522d-eaa8-4b3a-bf47-357e0fd4ca66-kube-api-access-dk7fc\") on node \"crc\" DevicePath \"\"" Jan 01 09:46:31 crc kubenswrapper[4867]: I0101 09:46:31.355405 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-csvwj" event={"ID":"a79b522d-eaa8-4b3a-bf47-357e0fd4ca66","Type":"ContainerDied","Data":"a13ecc62982b0eade2163bbe0d4a4807258313a24a925ec04bb8f7131937f1fc"} Jan 01 09:46:31 crc kubenswrapper[4867]: I0101 09:46:31.355859 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a13ecc62982b0eade2163bbe0d4a4807258313a24a925ec04bb8f7131937f1fc" Jan 01 09:46:31 crc kubenswrapper[4867]: I0101 09:46:31.355667 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-csvwj" Jan 01 09:46:40 crc kubenswrapper[4867]: I0101 09:46:40.762582 4867 scope.go:117] "RemoveContainer" containerID="6521b471c3b91363304ff1b54df8af56dceaefccba854ac8c9ba2997112ae659" Jan 01 09:46:51 crc kubenswrapper[4867]: I0101 09:46:51.331005 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:46:51 crc kubenswrapper[4867]: I0101 09:46:51.331863 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:47:21 crc kubenswrapper[4867]: I0101 09:47:21.331554 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:47:21 crc kubenswrapper[4867]: I0101 09:47:21.332206 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:47:21 crc kubenswrapper[4867]: I0101 09:47:21.332257 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 09:47:21 crc kubenswrapper[4867]: I0101 
09:47:21.333037 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c3a0bf8d3dcccff6661f6d6d4e4e646f3451d0401617d8e0355386059505e8b"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 09:47:21 crc kubenswrapper[4867]: I0101 09:47:21.333105 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://7c3a0bf8d3dcccff6661f6d6d4e4e646f3451d0401617d8e0355386059505e8b" gracePeriod=600 Jan 01 09:47:21 crc kubenswrapper[4867]: I0101 09:47:21.920020 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="7c3a0bf8d3dcccff6661f6d6d4e4e646f3451d0401617d8e0355386059505e8b" exitCode=0 Jan 01 09:47:21 crc kubenswrapper[4867]: I0101 09:47:21.920111 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"7c3a0bf8d3dcccff6661f6d6d4e4e646f3451d0401617d8e0355386059505e8b"} Jan 01 09:47:21 crc kubenswrapper[4867]: I0101 09:47:21.920307 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79"} Jan 01 09:47:21 crc kubenswrapper[4867]: I0101 09:47:21.920328 4867 scope.go:117] "RemoveContainer" containerID="fd98e574ca7574623e1ed48bfa390eba695323f03a3d7c0849c3b4c42842cbf1" Jan 01 09:48:43 crc kubenswrapper[4867]: I0101 09:48:43.927777 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-8xwxl"] Jan 01 09:48:43 crc kubenswrapper[4867]: E0101 09:48:43.930036 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79b522d-eaa8-4b3a-bf47-357e0fd4ca66" containerName="storage" Jan 01 09:48:43 crc kubenswrapper[4867]: I0101 09:48:43.930099 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79b522d-eaa8-4b3a-bf47-357e0fd4ca66" containerName="storage" Jan 01 09:48:43 crc kubenswrapper[4867]: I0101 09:48:43.930751 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79b522d-eaa8-4b3a-bf47-357e0fd4ca66" containerName="storage" Jan 01 09:48:43 crc kubenswrapper[4867]: I0101 09:48:43.935281 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:43 crc kubenswrapper[4867]: I0101 09:48:43.962953 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xwxl"] Jan 01 09:48:43 crc kubenswrapper[4867]: I0101 09:48:43.998222 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-utilities\") pod \"certified-operators-8xwxl\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:43 crc kubenswrapper[4867]: I0101 09:48:43.998501 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-catalog-content\") pod \"certified-operators-8xwxl\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:43 crc kubenswrapper[4867]: I0101 09:48:43.998676 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ql5s2\" (UniqueName: \"kubernetes.io/projected/2bc32b87-684c-4313-a009-44dc00e8b713-kube-api-access-ql5s2\") pod \"certified-operators-8xwxl\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.100495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-utilities\") pod \"certified-operators-8xwxl\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.100655 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-catalog-content\") pod \"certified-operators-8xwxl\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.100692 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql5s2\" (UniqueName: \"kubernetes.io/projected/2bc32b87-684c-4313-a009-44dc00e8b713-kube-api-access-ql5s2\") pod \"certified-operators-8xwxl\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.101207 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-catalog-content\") pod \"certified-operators-8xwxl\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.101419 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-utilities\") pod \"certified-operators-8xwxl\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.111728 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2mgpp"] Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.113428 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.125487 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2mgpp"] Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.132329 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql5s2\" (UniqueName: \"kubernetes.io/projected/2bc32b87-684c-4313-a009-44dc00e8b713-kube-api-access-ql5s2\") pod \"certified-operators-8xwxl\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.201908 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2n7\" (UniqueName: \"kubernetes.io/projected/132d1c2d-1465-423c-b366-00bda22e79f0-kube-api-access-5l2n7\") pod \"community-operators-2mgpp\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.202185 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-utilities\") pod \"community-operators-2mgpp\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " pod="openshift-marketplace/community-operators-2mgpp" Jan 
01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.202561 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-catalog-content\") pod \"community-operators-2mgpp\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.268120 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.306160 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-catalog-content\") pod \"community-operators-2mgpp\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.306514 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l2n7\" (UniqueName: \"kubernetes.io/projected/132d1c2d-1465-423c-b366-00bda22e79f0-kube-api-access-5l2n7\") pod \"community-operators-2mgpp\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.306576 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-utilities\") pod \"community-operators-2mgpp\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.307035 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-utilities\") pod \"community-operators-2mgpp\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.310275 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-catalog-content\") pod \"community-operators-2mgpp\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.328260 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l2n7\" (UniqueName: \"kubernetes.io/projected/132d1c2d-1465-423c-b366-00bda22e79f0-kube-api-access-5l2n7\") pod \"community-operators-2mgpp\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.472083 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.754636 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xwxl"] Jan 01 09:48:44 crc kubenswrapper[4867]: I0101 09:48:44.775933 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2mgpp"] Jan 01 09:48:45 crc kubenswrapper[4867]: I0101 09:48:45.702978 4867 generic.go:334] "Generic (PLEG): container finished" podID="132d1c2d-1465-423c-b366-00bda22e79f0" containerID="d2b24393238ccf7d876971a027e0745917f237f88a009b0e73eccb172001843f" exitCode=0 Jan 01 09:48:45 crc kubenswrapper[4867]: I0101 09:48:45.703147 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mgpp" event={"ID":"132d1c2d-1465-423c-b366-00bda22e79f0","Type":"ContainerDied","Data":"d2b24393238ccf7d876971a027e0745917f237f88a009b0e73eccb172001843f"} Jan 01 09:48:45 crc kubenswrapper[4867]: I0101 09:48:45.703364 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mgpp" event={"ID":"132d1c2d-1465-423c-b366-00bda22e79f0","Type":"ContainerStarted","Data":"a64f1ba1ab18351b14fdd5bfd270253fc1583fa0881fe225f62c0d5f5f40a1c1"} Jan 01 09:48:45 crc kubenswrapper[4867]: I0101 09:48:45.706177 4867 generic.go:334] "Generic (PLEG): container finished" podID="2bc32b87-684c-4313-a009-44dc00e8b713" containerID="3f91c2163e064274a6e6ea300083f667688cacfbde02b44c0f5bdcd649a9da8c" exitCode=0 Jan 01 09:48:45 crc kubenswrapper[4867]: I0101 09:48:45.706214 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xwxl" event={"ID":"2bc32b87-684c-4313-a009-44dc00e8b713","Type":"ContainerDied","Data":"3f91c2163e064274a6e6ea300083f667688cacfbde02b44c0f5bdcd649a9da8c"} Jan 01 09:48:45 crc kubenswrapper[4867]: I0101 09:48:45.706232 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xwxl" event={"ID":"2bc32b87-684c-4313-a009-44dc00e8b713","Type":"ContainerStarted","Data":"b47139e5402236a93a0992b86911f5ef3845d2608cc75d2240968d2efd0526e3"} Jan 01 09:48:46 crc kubenswrapper[4867]: I0101 09:48:46.716559 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xwxl" event={"ID":"2bc32b87-684c-4313-a009-44dc00e8b713","Type":"ContainerStarted","Data":"2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11"} Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.359231 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-bsvvf"] Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.360743 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.363736 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-mqfpf" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.366534 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.366702 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.373773 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-bsvvf"] Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.376736 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.386499 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-fsj5f"] Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.387667 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.389670 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.414330 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-fsj5f"] Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.472488 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-fsj5f\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.472554 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a868a15-fca3-401d-91b9-1503d9f5a1ae-config\") pod \"dnsmasq-dns-5986db9b4f-bsvvf\" (UID: \"7a868a15-fca3-401d-91b9-1503d9f5a1ae\") " pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.472602 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92qj\" (UniqueName: \"kubernetes.io/projected/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-kube-api-access-v92qj\") pod \"dnsmasq-dns-56bbd59dc5-fsj5f\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.472661 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsbbs\" (UniqueName: \"kubernetes.io/projected/7a868a15-fca3-401d-91b9-1503d9f5a1ae-kube-api-access-qsbbs\") pod \"dnsmasq-dns-5986db9b4f-bsvvf\" (UID: \"7a868a15-fca3-401d-91b9-1503d9f5a1ae\") " 
pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.472756 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-config\") pod \"dnsmasq-dns-56bbd59dc5-fsj5f\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.574547 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-config\") pod \"dnsmasq-dns-56bbd59dc5-fsj5f\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.574654 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-fsj5f\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.574686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a868a15-fca3-401d-91b9-1503d9f5a1ae-config\") pod \"dnsmasq-dns-5986db9b4f-bsvvf\" (UID: \"7a868a15-fca3-401d-91b9-1503d9f5a1ae\") " pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.574732 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92qj\" (UniqueName: \"kubernetes.io/projected/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-kube-api-access-v92qj\") pod \"dnsmasq-dns-56bbd59dc5-fsj5f\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:47 crc 
kubenswrapper[4867]: I0101 09:48:47.574774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsbbs\" (UniqueName: \"kubernetes.io/projected/7a868a15-fca3-401d-91b9-1503d9f5a1ae-kube-api-access-qsbbs\") pod \"dnsmasq-dns-5986db9b4f-bsvvf\" (UID: \"7a868a15-fca3-401d-91b9-1503d9f5a1ae\") " pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.575709 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a868a15-fca3-401d-91b9-1503d9f5a1ae-config\") pod \"dnsmasq-dns-5986db9b4f-bsvvf\" (UID: \"7a868a15-fca3-401d-91b9-1503d9f5a1ae\") " pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.575860 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-fsj5f\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.576188 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-config\") pod \"dnsmasq-dns-56bbd59dc5-fsj5f\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.598535 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsbbs\" (UniqueName: \"kubernetes.io/projected/7a868a15-fca3-401d-91b9-1503d9f5a1ae-kube-api-access-qsbbs\") pod \"dnsmasq-dns-5986db9b4f-bsvvf\" (UID: \"7a868a15-fca3-401d-91b9-1503d9f5a1ae\") " pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.598864 4867 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-v92qj\" (UniqueName: \"kubernetes.io/projected/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-kube-api-access-v92qj\") pod \"dnsmasq-dns-56bbd59dc5-fsj5f\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.676462 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.702075 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.725820 4867 generic.go:334] "Generic (PLEG): container finished" podID="132d1c2d-1465-423c-b366-00bda22e79f0" containerID="2f41a924d5e2656df175d830666818348bb54e2ae1e82101dc8eff0273965325" exitCode=0 Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.726565 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mgpp" event={"ID":"132d1c2d-1465-423c-b366-00bda22e79f0","Type":"ContainerDied","Data":"2f41a924d5e2656df175d830666818348bb54e2ae1e82101dc8eff0273965325"} Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.738421 4867 generic.go:334] "Generic (PLEG): container finished" podID="2bc32b87-684c-4313-a009-44dc00e8b713" containerID="2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11" exitCode=0 Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.738463 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xwxl" event={"ID":"2bc32b87-684c-4313-a009-44dc00e8b713","Type":"ContainerDied","Data":"2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11"} Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.787811 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-fsj5f"] Jan 01 09:48:47 crc kubenswrapper[4867]: 
I0101 09:48:47.841522 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hndrb"] Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.842943 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.846778 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hndrb"] Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.985069 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-config\") pod \"dnsmasq-dns-95587bc99-hndrb\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.985157 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-dns-svc\") pod \"dnsmasq-dns-95587bc99-hndrb\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:47 crc kubenswrapper[4867]: I0101 09:48:47.985187 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvdlf\" (UniqueName: \"kubernetes.io/projected/ba666d1e-4416-4028-b20f-6c1e135e3b5d-kube-api-access-rvdlf\") pod \"dnsmasq-dns-95587bc99-hndrb\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.068308 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-bsvvf"] Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.086692 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-dns-svc\") pod \"dnsmasq-dns-95587bc99-hndrb\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.086747 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvdlf\" (UniqueName: \"kubernetes.io/projected/ba666d1e-4416-4028-b20f-6c1e135e3b5d-kube-api-access-rvdlf\") pod \"dnsmasq-dns-95587bc99-hndrb\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.086823 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-config\") pod \"dnsmasq-dns-95587bc99-hndrb\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.087623 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-dns-svc\") pod \"dnsmasq-dns-95587bc99-hndrb\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.088025 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-config\") pod \"dnsmasq-dns-95587bc99-hndrb\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.093435 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-f5274"] Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.097143 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.109944 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvdlf\" (UniqueName: \"kubernetes.io/projected/ba666d1e-4416-4028-b20f-6c1e135e3b5d-kube-api-access-rvdlf\") pod \"dnsmasq-dns-95587bc99-hndrb\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.112690 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-f5274"] Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.188526 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.188673 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqm76\" (UniqueName: \"kubernetes.io/projected/81bb7bf2-99db-499a-b2ff-c681fe62ee62-kube-api-access-sqm76\") pod \"dnsmasq-dns-5d79f765b5-f5274\" (UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.188732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-f5274\" (UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.188755 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-config\") pod \"dnsmasq-dns-5d79f765b5-f5274\" (UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " 
pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.225333 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-bsvvf"] Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.290381 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqm76\" (UniqueName: \"kubernetes.io/projected/81bb7bf2-99db-499a-b2ff-c681fe62ee62-kube-api-access-sqm76\") pod \"dnsmasq-dns-5d79f765b5-f5274\" (UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.290442 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-f5274\" (UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.290462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-config\") pod \"dnsmasq-dns-5d79f765b5-f5274\" (UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.291252 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-f5274\" (UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.291401 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-config\") pod \"dnsmasq-dns-5d79f765b5-f5274\" 
(UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.306269 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-fsj5f"] Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.309231 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqm76\" (UniqueName: \"kubernetes.io/projected/81bb7bf2-99db-499a-b2ff-c681fe62ee62-kube-api-access-sqm76\") pod \"dnsmasq-dns-5d79f765b5-f5274\" (UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.438407 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.654562 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hndrb"] Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.747219 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-hndrb" event={"ID":"ba666d1e-4416-4028-b20f-6c1e135e3b5d","Type":"ContainerStarted","Data":"3dfc1445873af392fb1d071b039285f020b253f5ac4195d291542d10fd48a707"} Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.749393 4867 generic.go:334] "Generic (PLEG): container finished" podID="7a868a15-fca3-401d-91b9-1503d9f5a1ae" containerID="917f67a32b1bd4f9ea9a245795bbcc299c354ee587efb0b4daf37daf29c1ecaf" exitCode=0 Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.749454 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" event={"ID":"7a868a15-fca3-401d-91b9-1503d9f5a1ae","Type":"ContainerDied","Data":"917f67a32b1bd4f9ea9a245795bbcc299c354ee587efb0b4daf37daf29c1ecaf"} Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.749482 4867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" event={"ID":"7a868a15-fca3-401d-91b9-1503d9f5a1ae","Type":"ContainerStarted","Data":"fc308a2bd1fb1f13481910a5c3a4dcfcc98a85998a877a0deb3d3cec24722e18"} Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.762314 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mgpp" event={"ID":"132d1c2d-1465-423c-b366-00bda22e79f0","Type":"ContainerStarted","Data":"70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d"} Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.783022 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xwxl" event={"ID":"2bc32b87-684c-4313-a009-44dc00e8b713","Type":"ContainerStarted","Data":"a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd"} Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.784751 4867 generic.go:334] "Generic (PLEG): container finished" podID="ad0c7bb5-2153-4f92-9f1e-6ce26912ef55" containerID="23c361cd5458050babdc2fd8ad2296f54fdf84a4faf507822f7ef32c66c76c33" exitCode=0 Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.784782 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" event={"ID":"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55","Type":"ContainerDied","Data":"23c361cd5458050babdc2fd8ad2296f54fdf84a4faf507822f7ef32c66c76c33"} Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.784799 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" event={"ID":"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55","Type":"ContainerStarted","Data":"7bd0c3464c1b0544a4759a6da813aa029bf0831a84d8b6d502b188795e1a9aea"} Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.807395 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2mgpp" podStartSLOduration=2.18181933 podStartE2EDuration="4.807378834s" 
podCreationTimestamp="2026-01-01 09:48:44 +0000 UTC" firstStartedPulling="2026-01-01 09:48:45.707550468 +0000 UTC m=+4934.842819237" lastFinishedPulling="2026-01-01 09:48:48.333109982 +0000 UTC m=+4937.468378741" observedRunningTime="2026-01-01 09:48:48.802652059 +0000 UTC m=+4937.937920838" watchObservedRunningTime="2026-01-01 09:48:48.807378834 +0000 UTC m=+4937.942647603" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.848983 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8xwxl" podStartSLOduration=3.1998555140000002 podStartE2EDuration="5.848965049s" podCreationTimestamp="2026-01-01 09:48:43 +0000 UTC" firstStartedPulling="2026-01-01 09:48:45.707735834 +0000 UTC m=+4934.843004613" lastFinishedPulling="2026-01-01 09:48:48.356845379 +0000 UTC m=+4937.492114148" observedRunningTime="2026-01-01 09:48:48.846266092 +0000 UTC m=+4937.981534881" watchObservedRunningTime="2026-01-01 09:48:48.848965049 +0000 UTC m=+4937.984233818" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.933431 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-f5274"] Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.957529 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.967208 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.970260 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.970810 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.976569 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.976638 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.976661 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.976676 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 01 09:48:48 crc kubenswrapper[4867]: I0101 09:48:48.992649 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5cv2r" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.030169 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.106329 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.106384 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.106416 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.106433 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-config-data\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.106449 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.106514 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.106532 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.106554 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcnn5\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-kube-api-access-bcnn5\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.106569 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.106596 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.106637 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.165744 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.207600 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.207660 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.207693 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcnn5\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-kube-api-access-bcnn5\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.207723 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.207758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.207797 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.207837 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.207873 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.207921 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.207945 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-config-data\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.207966 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.208578 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 09:48:49 crc kubenswrapper[4867]: E0101 09:48:49.208829 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a868a15-fca3-401d-91b9-1503d9f5a1ae" containerName="init" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.208841 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a868a15-fca3-401d-91b9-1503d9f5a1ae" containerName="init" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.208992 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a868a15-fca3-401d-91b9-1503d9f5a1ae" containerName="init" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.209649 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.210361 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.211038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.211324 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.212744 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-config-data\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.213466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.214004 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-cell1-default-user" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.214336 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-29z5g" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.214388 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.214470 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.214555 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.214779 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.214859 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.214967 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.217030 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.217345 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.218034 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.218266 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.220241 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.222287 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.222318 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/398ce38eb77f06525022db021525dff8a29db5447dc71986068b172874e15a02/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.236985 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcnn5\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-kube-api-access-bcnn5\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.307376 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\") pod \"rabbitmq-server-0\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.308767 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v92qj\" (UniqueName: \"kubernetes.io/projected/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-kube-api-access-v92qj\") pod \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.308927 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-config\") pod 
\"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.309379 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-dns-svc\") pod \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\" (UID: \"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55\") " Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.310141 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsbbs\" (UniqueName: \"kubernetes.io/projected/7a868a15-fca3-401d-91b9-1503d9f5a1ae-kube-api-access-qsbbs\") pod \"7a868a15-fca3-401d-91b9-1503d9f5a1ae\" (UID: \"7a868a15-fca3-401d-91b9-1503d9f5a1ae\") " Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.311438 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a868a15-fca3-401d-91b9-1503d9f5a1ae-config\") pod \"7a868a15-fca3-401d-91b9-1503d9f5a1ae\" (UID: \"7a868a15-fca3-401d-91b9-1503d9f5a1ae\") " Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.313264 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a603b88e-42cc-47f4-a96a-3644524346dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.313400 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 
09:48:49.313459 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a603b88e-42cc-47f4-a96a-3644524346dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.313485 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.313530 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.313549 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.313576 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5qc\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-kube-api-access-7p5qc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 
09:48:49.313750 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.313791 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.313809 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.314017 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.315340 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-kube-api-access-v92qj" (OuterVolumeSpecName: "kube-api-access-v92qj") pod "ad0c7bb5-2153-4f92-9f1e-6ce26912ef55" (UID: "ad0c7bb5-2153-4f92-9f1e-6ce26912ef55"). InnerVolumeSpecName "kube-api-access-v92qj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.315403 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a868a15-fca3-401d-91b9-1503d9f5a1ae-kube-api-access-qsbbs" (OuterVolumeSpecName: "kube-api-access-qsbbs") pod "7a868a15-fca3-401d-91b9-1503d9f5a1ae" (UID: "7a868a15-fca3-401d-91b9-1503d9f5a1ae"). InnerVolumeSpecName "kube-api-access-qsbbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.325695 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a868a15-fca3-401d-91b9-1503d9f5a1ae-config" (OuterVolumeSpecName: "config") pod "7a868a15-fca3-401d-91b9-1503d9f5a1ae" (UID: "7a868a15-fca3-401d-91b9-1503d9f5a1ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.328036 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-config" (OuterVolumeSpecName: "config") pod "ad0c7bb5-2153-4f92-9f1e-6ce26912ef55" (UID: "ad0c7bb5-2153-4f92-9f1e-6ce26912ef55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.331959 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad0c7bb5-2153-4f92-9f1e-6ce26912ef55" (UID: "ad0c7bb5-2153-4f92-9f1e-6ce26912ef55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.344056 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.415722 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.415766 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.415788 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.415824 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.415862 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a603b88e-42cc-47f4-a96a-3644524346dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.415908 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.415931 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a603b88e-42cc-47f4-a96a-3644524346dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.415947 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.415960 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.415975 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.415991 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p5qc\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-kube-api-access-7p5qc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.416038 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v92qj\" (UniqueName: \"kubernetes.io/projected/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-kube-api-access-v92qj\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.416052 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-config\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.416062 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.416071 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsbbs\" (UniqueName: \"kubernetes.io/projected/7a868a15-fca3-401d-91b9-1503d9f5a1ae-kube-api-access-qsbbs\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.416079 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a868a15-fca3-401d-91b9-1503d9f5a1ae-config\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.417036 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.417349 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.417604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.417769 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.418115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.419281 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.422387 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a603b88e-42cc-47f4-a96a-3644524346dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.423385 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.423422 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dd0c54f362857cd9958c91636dfb491521abaa6cf57e38a339687a41ed7e5e9c/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.424217 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.439902 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a603b88e-42cc-47f4-a96a-3644524346dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.440019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p5qc\" (UniqueName: 
\"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-kube-api-access-7p5qc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.466868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\") pod \"rabbitmq-cell1-server-0\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.528973 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.787977 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.792395 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.793555 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-fsj5f" event={"ID":"ad0c7bb5-2153-4f92-9f1e-6ce26912ef55","Type":"ContainerDied","Data":"7bd0c3464c1b0544a4759a6da813aa029bf0831a84d8b6d502b188795e1a9aea"} Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.793779 4867 scope.go:117] "RemoveContainer" containerID="23c361cd5458050babdc2fd8ad2296f54fdf84a4faf507822f7ef32c66c76c33" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.800937 4867 generic.go:334] "Generic (PLEG): container finished" podID="ba666d1e-4416-4028-b20f-6c1e135e3b5d" containerID="0891dc57487589eaa25f58d417f1ab106dec37594027238be411ad1d3ec6964a" exitCode=0 Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.801007 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-hndrb" event={"ID":"ba666d1e-4416-4028-b20f-6c1e135e3b5d","Type":"ContainerDied","Data":"0891dc57487589eaa25f58d417f1ab106dec37594027238be411ad1d3ec6964a"} Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.805430 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" event={"ID":"7a868a15-fca3-401d-91b9-1503d9f5a1ae","Type":"ContainerDied","Data":"fc308a2bd1fb1f13481910a5c3a4dcfcc98a85998a877a0deb3d3cec24722e18"} Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.805499 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-bsvvf" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.807218 4867 generic.go:334] "Generic (PLEG): container finished" podID="81bb7bf2-99db-499a-b2ff-c681fe62ee62" containerID="ff622dce526fde4c13b9a3056f7eaedf53a838cc7e97bdb68f2996dce13290cb" exitCode=0 Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.808304 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" event={"ID":"81bb7bf2-99db-499a-b2ff-c681fe62ee62","Type":"ContainerDied","Data":"ff622dce526fde4c13b9a3056f7eaedf53a838cc7e97bdb68f2996dce13290cb"} Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.808338 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" event={"ID":"81bb7bf2-99db-499a-b2ff-c681fe62ee62","Type":"ContainerStarted","Data":"56936dd884090fe2913f73ff89436b4332c7d6bd42b0252efd80c864c9d06d5c"} Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.929963 4867 scope.go:117] "RemoveContainer" containerID="917f67a32b1bd4f9ea9a245795bbcc299c354ee587efb0b4daf37daf29c1ecaf" Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.976106 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-fsj5f"] Jan 01 09:48:49 crc kubenswrapper[4867]: I0101 09:48:49.981894 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-fsj5f"] Jan 01 09:48:50 crc kubenswrapper[4867]: E0101 09:48:50.031703 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad0c7bb5_2153_4f92_9f1e_6ce26912ef55.slice/crio-7bd0c3464c1b0544a4759a6da813aa029bf0831a84d8b6d502b188795e1a9aea\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a868a15_fca3_401d_91b9_1503d9f5a1ae.slice\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad0c7bb5_2153_4f92_9f1e_6ce26912ef55.slice\": RecentStats: unable to find data in memory cache]" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.032677 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-bsvvf"] Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.039488 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-bsvvf"] Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.041758 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 09:48:50 crc kubenswrapper[4867]: W0101 09:48:50.063563 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda603b88e_42cc_47f4_a96a_3644524346dc.slice/crio-d9f3ebf90061af7b45a03d11204bcdf26f2ab807b53bbbea5609913ad2ce3ce4 WatchSource:0}: Error finding container d9f3ebf90061af7b45a03d11204bcdf26f2ab807b53bbbea5609913ad2ce3ce4: Status 404 returned error can't find the container with id d9f3ebf90061af7b45a03d11204bcdf26f2ab807b53bbbea5609913ad2ce3ce4 Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.388066 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 01 09:48:50 crc kubenswrapper[4867]: E0101 09:48:50.388611 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0c7bb5-2153-4f92-9f1e-6ce26912ef55" containerName="init" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.388623 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0c7bb5-2153-4f92-9f1e-6ce26912ef55" containerName="init" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.388767 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0c7bb5-2153-4f92-9f1e-6ce26912ef55" containerName="init" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 
09:48:50.389488 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.391298 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.391564 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qmm6j" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.391773 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.392759 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.401191 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.403165 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.554246 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fd0654b-6402-47a9-baa9-e172d84990a1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.554330 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd0654b-6402-47a9-baa9-e172d84990a1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.554508 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9fd0654b-6402-47a9-baa9-e172d84990a1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.554587 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9fd0654b-6402-47a9-baa9-e172d84990a1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.554628 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9fd0654b-6402-47a9-baa9-e172d84990a1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.554691 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ea5991c8-da7d-4678-8b0b-e37c2e76b334\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea5991c8-da7d-4678-8b0b-e37c2e76b334\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.554735 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v66r\" (UniqueName: \"kubernetes.io/projected/9fd0654b-6402-47a9-baa9-e172d84990a1-kube-api-access-7v66r\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 
09:48:50.554819 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd0654b-6402-47a9-baa9-e172d84990a1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.656570 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9fd0654b-6402-47a9-baa9-e172d84990a1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.656615 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9fd0654b-6402-47a9-baa9-e172d84990a1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.656631 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9fd0654b-6402-47a9-baa9-e172d84990a1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.656656 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ea5991c8-da7d-4678-8b0b-e37c2e76b334\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea5991c8-da7d-4678-8b0b-e37c2e76b334\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.656673 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7v66r\" (UniqueName: \"kubernetes.io/projected/9fd0654b-6402-47a9-baa9-e172d84990a1-kube-api-access-7v66r\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.656704 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd0654b-6402-47a9-baa9-e172d84990a1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.656763 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fd0654b-6402-47a9-baa9-e172d84990a1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.656815 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd0654b-6402-47a9-baa9-e172d84990a1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.657246 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9fd0654b-6402-47a9-baa9-e172d84990a1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.657786 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/9fd0654b-6402-47a9-baa9-e172d84990a1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.658071 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9fd0654b-6402-47a9-baa9-e172d84990a1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.659095 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fd0654b-6402-47a9-baa9-e172d84990a1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.660120 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.660162 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ea5991c8-da7d-4678-8b0b-e37c2e76b334\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea5991c8-da7d-4678-8b0b-e37c2e76b334\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f39b15aac49f998c170c8d08656c409a2742fbb980e3195eb98f99443caed60e/globalmount\"" pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.661606 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fd0654b-6402-47a9-baa9-e172d84990a1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.663802 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd0654b-6402-47a9-baa9-e172d84990a1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.674506 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v66r\" (UniqueName: \"kubernetes.io/projected/9fd0654b-6402-47a9-baa9-e172d84990a1-kube-api-access-7v66r\") pod \"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.693956 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ea5991c8-da7d-4678-8b0b-e37c2e76b334\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea5991c8-da7d-4678-8b0b-e37c2e76b334\") pod 
\"openstack-galera-0\" (UID: \"9fd0654b-6402-47a9-baa9-e172d84990a1\") " pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.703189 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.823088 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" event={"ID":"81bb7bf2-99db-499a-b2ff-c681fe62ee62","Type":"ContainerStarted","Data":"21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586"} Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.823848 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.825411 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a603b88e-42cc-47f4-a96a-3644524346dc","Type":"ContainerStarted","Data":"d9f3ebf90061af7b45a03d11204bcdf26f2ab807b53bbbea5609913ad2ce3ce4"} Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.826398 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78e20b53-c66f-44e9-8fb6-2280f8c50ac6","Type":"ContainerStarted","Data":"83d9f8b2de0ef17a0ad692faf7e23f7ad73bc04d5de6e185b96042641fd23a06"} Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.838799 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-hndrb" event={"ID":"ba666d1e-4416-4028-b20f-6c1e135e3b5d","Type":"ContainerStarted","Data":"05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13"} Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.839585 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.851141 4867 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" podStartSLOduration=2.85112093 podStartE2EDuration="2.85112093s" podCreationTimestamp="2026-01-01 09:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:48:50.845446458 +0000 UTC m=+4939.980715247" watchObservedRunningTime="2026-01-01 09:48:50.85112093 +0000 UTC m=+4939.986389699" Jan 01 09:48:50 crc kubenswrapper[4867]: I0101 09:48:50.877019 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95587bc99-hndrb" podStartSLOduration=3.877002888 podStartE2EDuration="3.877002888s" podCreationTimestamp="2026-01-01 09:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:48:50.871531932 +0000 UTC m=+4940.006800711" watchObservedRunningTime="2026-01-01 09:48:50.877002888 +0000 UTC m=+4940.012271657" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.136860 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a868a15-fca3-401d-91b9-1503d9f5a1ae" path="/var/lib/kubelet/pods/7a868a15-fca3-401d-91b9-1503d9f5a1ae/volumes" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.137769 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0c7bb5-2153-4f92-9f1e-6ce26912ef55" path="/var/lib/kubelet/pods/ad0c7bb5-2153-4f92-9f1e-6ce26912ef55/volumes" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.182703 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.849062 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.852875 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.855903 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.856276 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mlz4k" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.856510 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.856862 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.860804 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a603b88e-42cc-47f4-a96a-3644524346dc","Type":"ContainerStarted","Data":"d84b3f1af876135f0b78c47863e34090e2e81d7a0f4f0c10f945c6ff02707a86"} Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.862755 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.863262 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9fd0654b-6402-47a9-baa9-e172d84990a1","Type":"ContainerStarted","Data":"a90ffb9e01c7ae7f5733371bc2d2c6e0cd458b3a7c73372a70d322023a01bb4d"} Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.863300 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9fd0654b-6402-47a9-baa9-e172d84990a1","Type":"ContainerStarted","Data":"5ef8ef39b52f605d3685c473217cad8d66e31bdae9d24963cab59f0b116b5bf9"} Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.866860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"78e20b53-c66f-44e9-8fb6-2280f8c50ac6","Type":"ContainerStarted","Data":"848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0"} Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.988310 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32626dd7-5d1c-4beb-84bd-b97be403cc4a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.988688 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32626dd7-5d1c-4beb-84bd-b97be403cc4a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.988729 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32626dd7-5d1c-4beb-84bd-b97be403cc4a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.988948 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-48e9b1ec-6c56-4020-a802-5ff70b1c2c27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48e9b1ec-6c56-4020-a802-5ff70b1c2c27\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.988970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-7bmpl\" (UniqueName: \"kubernetes.io/projected/32626dd7-5d1c-4beb-84bd-b97be403cc4a-kube-api-access-7bmpl\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.989067 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32626dd7-5d1c-4beb-84bd-b97be403cc4a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.989104 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32626dd7-5d1c-4beb-84bd-b97be403cc4a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:51 crc kubenswrapper[4867]: I0101 09:48:51.989172 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32626dd7-5d1c-4beb-84bd-b97be403cc4a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.090869 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32626dd7-5d1c-4beb-84bd-b97be403cc4a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.090958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32626dd7-5d1c-4beb-84bd-b97be403cc4a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.091264 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32626dd7-5d1c-4beb-84bd-b97be403cc4a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.092262 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32626dd7-5d1c-4beb-84bd-b97be403cc4a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.092319 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32626dd7-5d1c-4beb-84bd-b97be403cc4a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.092356 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32626dd7-5d1c-4beb-84bd-b97be403cc4a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.092456 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-48e9b1ec-6c56-4020-a802-5ff70b1c2c27\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48e9b1ec-6c56-4020-a802-5ff70b1c2c27\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.092488 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bmpl\" (UniqueName: \"kubernetes.io/projected/32626dd7-5d1c-4beb-84bd-b97be403cc4a-kube-api-access-7bmpl\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.092566 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32626dd7-5d1c-4beb-84bd-b97be403cc4a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.092169 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32626dd7-5d1c-4beb-84bd-b97be403cc4a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.094301 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32626dd7-5d1c-4beb-84bd-b97be403cc4a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.095418 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/32626dd7-5d1c-4beb-84bd-b97be403cc4a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.095978 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.096019 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-48e9b1ec-6c56-4020-a802-5ff70b1c2c27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48e9b1ec-6c56-4020-a802-5ff70b1c2c27\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a069be68caec695f438482d897e553661020b38651c0bd10fe2b5b0bd4162700/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.097084 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32626dd7-5d1c-4beb-84bd-b97be403cc4a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.100396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32626dd7-5d1c-4beb-84bd-b97be403cc4a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.116866 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bmpl\" (UniqueName: 
\"kubernetes.io/projected/32626dd7-5d1c-4beb-84bd-b97be403cc4a-kube-api-access-7bmpl\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.133112 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-48e9b1ec-6c56-4020-a802-5ff70b1c2c27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48e9b1ec-6c56-4020-a802-5ff70b1c2c27\") pod \"openstack-cell1-galera-0\" (UID: \"32626dd7-5d1c-4beb-84bd-b97be403cc4a\") " pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.173701 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.388069 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.389171 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.391860 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.391957 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vrvlx" Jan 01 09:48:52 crc kubenswrapper[4867]: I0101 09:48:52.392045 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.410011 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.497282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4lp\" (UniqueName: \"kubernetes.io/projected/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-kube-api-access-6x4lp\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.497359 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-config-data\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.497396 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.497529 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.497589 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-kolla-config\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.598721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x4lp\" (UniqueName: \"kubernetes.io/projected/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-kube-api-access-6x4lp\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.598778 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-config-data\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.598802 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.598830 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.598847 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-kolla-config\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.599647 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-kolla-config\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.600401 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-config-data\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.603692 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.605560 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.616255 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x4lp\" (UniqueName: 
\"kubernetes.io/projected/ffdde0e8-731c-4b7d-ae0f-9e6d793004c3-kube-api-access-6x4lp\") pod \"memcached-0\" (UID: \"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3\") " pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:52.715827 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:53.359086 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:53.651485 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 01 09:48:53 crc kubenswrapper[4867]: I0101 09:48:53.888510 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"32626dd7-5d1c-4beb-84bd-b97be403cc4a","Type":"ContainerStarted","Data":"aa590e413d32f581fda1a2f93dc39b5a54fa78581bf8416f017e4e92af264a27"} Jan 01 09:48:53 crc kubenswrapper[4867]: W0101 09:48:53.963691 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffdde0e8_731c_4b7d_ae0f_9e6d793004c3.slice/crio-e4661bdcef04c49b56a0d7386d7ae64f64d92f91c1c79cc83a122095f6144eb2 WatchSource:0}: Error finding container e4661bdcef04c49b56a0d7386d7ae64f64d92f91c1c79cc83a122095f6144eb2: Status 404 returned error can't find the container with id e4661bdcef04c49b56a0d7386d7ae64f64d92f91c1c79cc83a122095f6144eb2 Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.269164 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.269739 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.316744 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.473234 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.473286 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.521826 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.899970 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3","Type":"ContainerStarted","Data":"f6d6126b95f6ab3ce685ddcbf886b4ffbf78350a404d2cc0712b72257b7d57da"} Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.900048 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ffdde0e8-731c-4b7d-ae0f-9e6d793004c3","Type":"ContainerStarted","Data":"e4661bdcef04c49b56a0d7386d7ae64f64d92f91c1c79cc83a122095f6144eb2"} Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.901696 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.905126 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"32626dd7-5d1c-4beb-84bd-b97be403cc4a","Type":"ContainerStarted","Data":"dcf56f6d13a1bdf8fd0053bd6bcd2d85d40894ed039f9d9e851fb3caa0693c02"} Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.976762 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.97672677 podStartE2EDuration="2.97672677s" podCreationTimestamp="2026-01-01 09:48:52 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:48:54.966388336 +0000 UTC m=+4944.101657105" watchObservedRunningTime="2026-01-01 09:48:54.97672677 +0000 UTC m=+4944.111995589" Jan 01 09:48:54 crc kubenswrapper[4867]: I0101 09:48:54.993975 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:55 crc kubenswrapper[4867]: I0101 09:48:55.011280 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:55 crc kubenswrapper[4867]: I0101 09:48:55.916170 4867 generic.go:334] "Generic (PLEG): container finished" podID="9fd0654b-6402-47a9-baa9-e172d84990a1" containerID="a90ffb9e01c7ae7f5733371bc2d2c6e0cd458b3a7c73372a70d322023a01bb4d" exitCode=0 Jan 01 09:48:55 crc kubenswrapper[4867]: I0101 09:48:55.916208 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9fd0654b-6402-47a9-baa9-e172d84990a1","Type":"ContainerDied","Data":"a90ffb9e01c7ae7f5733371bc2d2c6e0cd458b3a7c73372a70d322023a01bb4d"} Jan 01 09:48:56 crc kubenswrapper[4867]: I0101 09:48:56.163259 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xwxl"] Jan 01 09:48:56 crc kubenswrapper[4867]: I0101 09:48:56.948194 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8xwxl" podUID="2bc32b87-684c-4313-a009-44dc00e8b713" containerName="registry-server" containerID="cri-o://a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd" gracePeriod=2 Jan 01 09:48:56 crc kubenswrapper[4867]: I0101 09:48:56.949283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"9fd0654b-6402-47a9-baa9-e172d84990a1","Type":"ContainerStarted","Data":"2130ea41f078988437644f12051e7f47323d39c59e361b59280d5c71481266fd"} Jan 01 09:48:56 crc kubenswrapper[4867]: I0101 09:48:56.986079 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.986047677 podStartE2EDuration="7.986047677s" podCreationTimestamp="2026-01-01 09:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:48:56.98159741 +0000 UTC m=+4946.116866199" watchObservedRunningTime="2026-01-01 09:48:56.986047677 +0000 UTC m=+4946.121316546" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.158762 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2mgpp"] Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.159017 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2mgpp" podUID="132d1c2d-1465-423c-b366-00bda22e79f0" containerName="registry-server" containerID="cri-o://70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d" gracePeriod=2 Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.442814 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.582282 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.595627 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-catalog-content\") pod \"2bc32b87-684c-4313-a009-44dc00e8b713\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.595715 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql5s2\" (UniqueName: \"kubernetes.io/projected/2bc32b87-684c-4313-a009-44dc00e8b713-kube-api-access-ql5s2\") pod \"2bc32b87-684c-4313-a009-44dc00e8b713\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.595985 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-utilities\") pod \"2bc32b87-684c-4313-a009-44dc00e8b713\" (UID: \"2bc32b87-684c-4313-a009-44dc00e8b713\") " Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.597472 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-utilities" (OuterVolumeSpecName: "utilities") pod "2bc32b87-684c-4313-a009-44dc00e8b713" (UID: "2bc32b87-684c-4313-a009-44dc00e8b713"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.601843 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bc32b87-684c-4313-a009-44dc00e8b713-kube-api-access-ql5s2" (OuterVolumeSpecName: "kube-api-access-ql5s2") pod "2bc32b87-684c-4313-a009-44dc00e8b713" (UID: "2bc32b87-684c-4313-a009-44dc00e8b713"). InnerVolumeSpecName "kube-api-access-ql5s2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.697716 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-utilities\") pod \"132d1c2d-1465-423c-b366-00bda22e79f0\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.697775 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-catalog-content\") pod \"132d1c2d-1465-423c-b366-00bda22e79f0\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.697809 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l2n7\" (UniqueName: \"kubernetes.io/projected/132d1c2d-1465-423c-b366-00bda22e79f0-kube-api-access-5l2n7\") pod \"132d1c2d-1465-423c-b366-00bda22e79f0\" (UID: \"132d1c2d-1465-423c-b366-00bda22e79f0\") " Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.698071 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.698087 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql5s2\" (UniqueName: \"kubernetes.io/projected/2bc32b87-684c-4313-a009-44dc00e8b713-kube-api-access-ql5s2\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.699573 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-utilities" (OuterVolumeSpecName: "utilities") pod "132d1c2d-1465-423c-b366-00bda22e79f0" (UID: 
"132d1c2d-1465-423c-b366-00bda22e79f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.714389 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/132d1c2d-1465-423c-b366-00bda22e79f0-kube-api-access-5l2n7" (OuterVolumeSpecName: "kube-api-access-5l2n7") pod "132d1c2d-1465-423c-b366-00bda22e79f0" (UID: "132d1c2d-1465-423c-b366-00bda22e79f0"). InnerVolumeSpecName "kube-api-access-5l2n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.799803 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.799869 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l2n7\" (UniqueName: \"kubernetes.io/projected/132d1c2d-1465-423c-b366-00bda22e79f0-kube-api-access-5l2n7\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.960250 4867 generic.go:334] "Generic (PLEG): container finished" podID="2bc32b87-684c-4313-a009-44dc00e8b713" containerID="a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd" exitCode=0 Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.960301 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xwxl" event={"ID":"2bc32b87-684c-4313-a009-44dc00e8b713","Type":"ContainerDied","Data":"a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd"} Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.960349 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8xwxl" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.960373 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xwxl" event={"ID":"2bc32b87-684c-4313-a009-44dc00e8b713","Type":"ContainerDied","Data":"b47139e5402236a93a0992b86911f5ef3845d2608cc75d2240968d2efd0526e3"} Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.960406 4867 scope.go:117] "RemoveContainer" containerID="a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd" Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.965119 4867 generic.go:334] "Generic (PLEG): container finished" podID="132d1c2d-1465-423c-b366-00bda22e79f0" containerID="70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d" exitCode=0 Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.965169 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mgpp" event={"ID":"132d1c2d-1465-423c-b366-00bda22e79f0","Type":"ContainerDied","Data":"70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d"} Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.965202 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mgpp" event={"ID":"132d1c2d-1465-423c-b366-00bda22e79f0","Type":"ContainerDied","Data":"a64f1ba1ab18351b14fdd5bfd270253fc1583fa0881fe225f62c0d5f5f40a1c1"} Jan 01 09:48:57 crc kubenswrapper[4867]: I0101 09:48:57.965225 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2mgpp" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.010344 4867 scope.go:117] "RemoveContainer" containerID="2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.044418 4867 scope.go:117] "RemoveContainer" containerID="3f91c2163e064274a6e6ea300083f667688cacfbde02b44c0f5bdcd649a9da8c" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.077645 4867 scope.go:117] "RemoveContainer" containerID="a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd" Jan 01 09:48:58 crc kubenswrapper[4867]: E0101 09:48:58.078354 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd\": container with ID starting with a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd not found: ID does not exist" containerID="a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.078398 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd"} err="failed to get container status \"a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd\": rpc error: code = NotFound desc = could not find container \"a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd\": container with ID starting with a4f2229348b672ac020e97abf0c84f748b642f94818ddc742c90605833c966dd not found: ID does not exist" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.078427 4867 scope.go:117] "RemoveContainer" containerID="2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11" Jan 01 09:48:58 crc kubenswrapper[4867]: E0101 09:48:58.078973 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11\": container with ID starting with 2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11 not found: ID does not exist" containerID="2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.079004 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11"} err="failed to get container status \"2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11\": rpc error: code = NotFound desc = could not find container \"2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11\": container with ID starting with 2cf78b412fd9b338e442fa3e19c6b5e4897c7f03a7887bcbb888ef18706afa11 not found: ID does not exist" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.079024 4867 scope.go:117] "RemoveContainer" containerID="3f91c2163e064274a6e6ea300083f667688cacfbde02b44c0f5bdcd649a9da8c" Jan 01 09:48:58 crc kubenswrapper[4867]: E0101 09:48:58.079460 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f91c2163e064274a6e6ea300083f667688cacfbde02b44c0f5bdcd649a9da8c\": container with ID starting with 3f91c2163e064274a6e6ea300083f667688cacfbde02b44c0f5bdcd649a9da8c not found: ID does not exist" containerID="3f91c2163e064274a6e6ea300083f667688cacfbde02b44c0f5bdcd649a9da8c" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.079553 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f91c2163e064274a6e6ea300083f667688cacfbde02b44c0f5bdcd649a9da8c"} err="failed to get container status \"3f91c2163e064274a6e6ea300083f667688cacfbde02b44c0f5bdcd649a9da8c\": rpc error: code = NotFound desc = could not find container 
\"3f91c2163e064274a6e6ea300083f667688cacfbde02b44c0f5bdcd649a9da8c\": container with ID starting with 3f91c2163e064274a6e6ea300083f667688cacfbde02b44c0f5bdcd649a9da8c not found: ID does not exist" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.079606 4867 scope.go:117] "RemoveContainer" containerID="70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.106704 4867 scope.go:117] "RemoveContainer" containerID="2f41a924d5e2656df175d830666818348bb54e2ae1e82101dc8eff0273965325" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.133454 4867 scope.go:117] "RemoveContainer" containerID="d2b24393238ccf7d876971a027e0745917f237f88a009b0e73eccb172001843f" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.163612 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bc32b87-684c-4313-a009-44dc00e8b713" (UID: "2bc32b87-684c-4313-a009-44dc00e8b713"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.164607 4867 scope.go:117] "RemoveContainer" containerID="70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d" Jan 01 09:48:58 crc kubenswrapper[4867]: E0101 09:48:58.165220 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d\": container with ID starting with 70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d not found: ID does not exist" containerID="70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.165259 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d"} err="failed to get container status \"70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d\": rpc error: code = NotFound desc = could not find container \"70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d\": container with ID starting with 70ad49e05e131fcaacc37b190f363d65438d5a97ed21df7492e88c78b2db390d not found: ID does not exist" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.165281 4867 scope.go:117] "RemoveContainer" containerID="2f41a924d5e2656df175d830666818348bb54e2ae1e82101dc8eff0273965325" Jan 01 09:48:58 crc kubenswrapper[4867]: E0101 09:48:58.165570 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f41a924d5e2656df175d830666818348bb54e2ae1e82101dc8eff0273965325\": container with ID starting with 2f41a924d5e2656df175d830666818348bb54e2ae1e82101dc8eff0273965325 not found: ID does not exist" containerID="2f41a924d5e2656df175d830666818348bb54e2ae1e82101dc8eff0273965325" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.165601 
4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f41a924d5e2656df175d830666818348bb54e2ae1e82101dc8eff0273965325"} err="failed to get container status \"2f41a924d5e2656df175d830666818348bb54e2ae1e82101dc8eff0273965325\": rpc error: code = NotFound desc = could not find container \"2f41a924d5e2656df175d830666818348bb54e2ae1e82101dc8eff0273965325\": container with ID starting with 2f41a924d5e2656df175d830666818348bb54e2ae1e82101dc8eff0273965325 not found: ID does not exist" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.165619 4867 scope.go:117] "RemoveContainer" containerID="d2b24393238ccf7d876971a027e0745917f237f88a009b0e73eccb172001843f" Jan 01 09:48:58 crc kubenswrapper[4867]: E0101 09:48:58.165963 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2b24393238ccf7d876971a027e0745917f237f88a009b0e73eccb172001843f\": container with ID starting with d2b24393238ccf7d876971a027e0745917f237f88a009b0e73eccb172001843f not found: ID does not exist" containerID="d2b24393238ccf7d876971a027e0745917f237f88a009b0e73eccb172001843f" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.166021 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2b24393238ccf7d876971a027e0745917f237f88a009b0e73eccb172001843f"} err="failed to get container status \"d2b24393238ccf7d876971a027e0745917f237f88a009b0e73eccb172001843f\": rpc error: code = NotFound desc = could not find container \"d2b24393238ccf7d876971a027e0745917f237f88a009b0e73eccb172001843f\": container with ID starting with d2b24393238ccf7d876971a027e0745917f237f88a009b0e73eccb172001843f not found: ID does not exist" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.177356 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "132d1c2d-1465-423c-b366-00bda22e79f0" (UID: "132d1c2d-1465-423c-b366-00bda22e79f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.190099 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.207486 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d1c2d-1465-423c-b366-00bda22e79f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.207528 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc32b87-684c-4313-a009-44dc00e8b713-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.369597 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xwxl"] Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.379543 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8xwxl"] Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.386043 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2mgpp"] Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.392268 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2mgpp"] Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.440796 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.493875 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hndrb"] Jan 01 09:48:58 
crc kubenswrapper[4867]: I0101 09:48:58.975508 4867 generic.go:334] "Generic (PLEG): container finished" podID="32626dd7-5d1c-4beb-84bd-b97be403cc4a" containerID="dcf56f6d13a1bdf8fd0053bd6bcd2d85d40894ed039f9d9e851fb3caa0693c02" exitCode=0 Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.975599 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"32626dd7-5d1c-4beb-84bd-b97be403cc4a","Type":"ContainerDied","Data":"dcf56f6d13a1bdf8fd0053bd6bcd2d85d40894ed039f9d9e851fb3caa0693c02"} Jan 01 09:48:58 crc kubenswrapper[4867]: I0101 09:48:58.975995 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95587bc99-hndrb" podUID="ba666d1e-4416-4028-b20f-6c1e135e3b5d" containerName="dnsmasq-dns" containerID="cri-o://05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13" gracePeriod=10 Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.144434 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="132d1c2d-1465-423c-b366-00bda22e79f0" path="/var/lib/kubelet/pods/132d1c2d-1465-423c-b366-00bda22e79f0/volumes" Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.145318 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bc32b87-684c-4313-a009-44dc00e8b713" path="/var/lib/kubelet/pods/2bc32b87-684c-4313-a009-44dc00e8b713/volumes" Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.412050 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.539147 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvdlf\" (UniqueName: \"kubernetes.io/projected/ba666d1e-4416-4028-b20f-6c1e135e3b5d-kube-api-access-rvdlf\") pod \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.539200 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-config\") pod \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.539347 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-dns-svc\") pod \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\" (UID: \"ba666d1e-4416-4028-b20f-6c1e135e3b5d\") " Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.547174 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba666d1e-4416-4028-b20f-6c1e135e3b5d-kube-api-access-rvdlf" (OuterVolumeSpecName: "kube-api-access-rvdlf") pod "ba666d1e-4416-4028-b20f-6c1e135e3b5d" (UID: "ba666d1e-4416-4028-b20f-6c1e135e3b5d"). InnerVolumeSpecName "kube-api-access-rvdlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.584713 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba666d1e-4416-4028-b20f-6c1e135e3b5d" (UID: "ba666d1e-4416-4028-b20f-6c1e135e3b5d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.591199 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-config" (OuterVolumeSpecName: "config") pod "ba666d1e-4416-4028-b20f-6c1e135e3b5d" (UID: "ba666d1e-4416-4028-b20f-6c1e135e3b5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.640815 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvdlf\" (UniqueName: \"kubernetes.io/projected/ba666d1e-4416-4028-b20f-6c1e135e3b5d-kube-api-access-rvdlf\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.640854 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-config\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.640863 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba666d1e-4416-4028-b20f-6c1e135e3b5d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.991708 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"32626dd7-5d1c-4beb-84bd-b97be403cc4a","Type":"ContainerStarted","Data":"dff7163812c07e6fdc4b6cbc3806ae588b222843323bf196b4f94a24a1e13f01"} Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.997951 4867 generic.go:334] "Generic (PLEG): container finished" podID="ba666d1e-4416-4028-b20f-6c1e135e3b5d" containerID="05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13" exitCode=0 Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.997982 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-hndrb" 
event={"ID":"ba666d1e-4416-4028-b20f-6c1e135e3b5d","Type":"ContainerDied","Data":"05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13"} Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.998045 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-hndrb" Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.998086 4867 scope.go:117] "RemoveContainer" containerID="05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13" Jan 01 09:48:59 crc kubenswrapper[4867]: I0101 09:48:59.998065 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-hndrb" event={"ID":"ba666d1e-4416-4028-b20f-6c1e135e3b5d","Type":"ContainerDied","Data":"3dfc1445873af392fb1d071b039285f020b253f5ac4195d291542d10fd48a707"} Jan 01 09:49:00 crc kubenswrapper[4867]: I0101 09:49:00.035001 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.034967211 podStartE2EDuration="10.034967211s" podCreationTimestamp="2026-01-01 09:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:49:00.025147771 +0000 UTC m=+4949.160416620" watchObservedRunningTime="2026-01-01 09:49:00.034967211 +0000 UTC m=+4949.170236030" Jan 01 09:49:00 crc kubenswrapper[4867]: I0101 09:49:00.056433 4867 scope.go:117] "RemoveContainer" containerID="0891dc57487589eaa25f58d417f1ab106dec37594027238be411ad1d3ec6964a" Jan 01 09:49:00 crc kubenswrapper[4867]: I0101 09:49:00.066198 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hndrb"] Jan 01 09:49:00 crc kubenswrapper[4867]: I0101 09:49:00.074164 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-hndrb"] Jan 01 09:49:00 crc kubenswrapper[4867]: I0101 09:49:00.080001 4867 scope.go:117] 
"RemoveContainer" containerID="05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13" Jan 01 09:49:00 crc kubenswrapper[4867]: E0101 09:49:00.080454 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13\": container with ID starting with 05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13 not found: ID does not exist" containerID="05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13" Jan 01 09:49:00 crc kubenswrapper[4867]: I0101 09:49:00.080496 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13"} err="failed to get container status \"05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13\": rpc error: code = NotFound desc = could not find container \"05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13\": container with ID starting with 05c9a8a2ecea927228db9ba76148d839639dc0f964f5c69f95da2d4327bb6a13 not found: ID does not exist" Jan 01 09:49:00 crc kubenswrapper[4867]: I0101 09:49:00.080521 4867 scope.go:117] "RemoveContainer" containerID="0891dc57487589eaa25f58d417f1ab106dec37594027238be411ad1d3ec6964a" Jan 01 09:49:00 crc kubenswrapper[4867]: E0101 09:49:00.080819 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0891dc57487589eaa25f58d417f1ab106dec37594027238be411ad1d3ec6964a\": container with ID starting with 0891dc57487589eaa25f58d417f1ab106dec37594027238be411ad1d3ec6964a not found: ID does not exist" containerID="0891dc57487589eaa25f58d417f1ab106dec37594027238be411ad1d3ec6964a" Jan 01 09:49:00 crc kubenswrapper[4867]: I0101 09:49:00.080846 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0891dc57487589eaa25f58d417f1ab106dec37594027238be411ad1d3ec6964a"} err="failed to get container status \"0891dc57487589eaa25f58d417f1ab106dec37594027238be411ad1d3ec6964a\": rpc error: code = NotFound desc = could not find container \"0891dc57487589eaa25f58d417f1ab106dec37594027238be411ad1d3ec6964a\": container with ID starting with 0891dc57487589eaa25f58d417f1ab106dec37594027238be411ad1d3ec6964a not found: ID does not exist" Jan 01 09:49:00 crc kubenswrapper[4867]: I0101 09:49:00.703767 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 01 09:49:00 crc kubenswrapper[4867]: I0101 09:49:00.704292 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 01 09:49:01 crc kubenswrapper[4867]: I0101 09:49:01.055811 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 01 09:49:01 crc kubenswrapper[4867]: I0101 09:49:01.138231 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba666d1e-4416-4028-b20f-6c1e135e3b5d" path="/var/lib/kubelet/pods/ba666d1e-4416-4028-b20f-6c1e135e3b5d/volumes" Jan 01 09:49:01 crc kubenswrapper[4867]: I0101 09:49:01.151925 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 01 09:49:02 crc kubenswrapper[4867]: I0101 09:49:02.173975 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 01 09:49:02 crc kubenswrapper[4867]: I0101 09:49:02.174778 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 01 09:49:02 crc kubenswrapper[4867]: I0101 09:49:02.718215 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 01 09:49:04 crc kubenswrapper[4867]: I0101 09:49:04.540545 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 01 09:49:04 crc kubenswrapper[4867]: I0101 09:49:04.665381 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.384248 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-npcr2"] Jan 01 09:49:09 crc kubenswrapper[4867]: E0101 09:49:09.385145 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bc32b87-684c-4313-a009-44dc00e8b713" containerName="registry-server" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.385168 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bc32b87-684c-4313-a009-44dc00e8b713" containerName="registry-server" Jan 01 09:49:09 crc kubenswrapper[4867]: E0101 09:49:09.385189 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba666d1e-4416-4028-b20f-6c1e135e3b5d" containerName="dnsmasq-dns" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.385203 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba666d1e-4416-4028-b20f-6c1e135e3b5d" containerName="dnsmasq-dns" Jan 01 09:49:09 crc kubenswrapper[4867]: E0101 09:49:09.385220 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132d1c2d-1465-423c-b366-00bda22e79f0" containerName="registry-server" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.385232 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="132d1c2d-1465-423c-b366-00bda22e79f0" containerName="registry-server" Jan 01 09:49:09 crc kubenswrapper[4867]: E0101 09:49:09.385259 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bc32b87-684c-4313-a009-44dc00e8b713" containerName="extract-content" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.385271 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2bc32b87-684c-4313-a009-44dc00e8b713" containerName="extract-content" Jan 01 09:49:09 crc kubenswrapper[4867]: E0101 09:49:09.385294 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba666d1e-4416-4028-b20f-6c1e135e3b5d" containerName="init" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.385306 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba666d1e-4416-4028-b20f-6c1e135e3b5d" containerName="init" Jan 01 09:49:09 crc kubenswrapper[4867]: E0101 09:49:09.385332 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bc32b87-684c-4313-a009-44dc00e8b713" containerName="extract-utilities" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.385344 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bc32b87-684c-4313-a009-44dc00e8b713" containerName="extract-utilities" Jan 01 09:49:09 crc kubenswrapper[4867]: E0101 09:49:09.385368 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132d1c2d-1465-423c-b366-00bda22e79f0" containerName="extract-content" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.385379 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="132d1c2d-1465-423c-b366-00bda22e79f0" containerName="extract-content" Jan 01 09:49:09 crc kubenswrapper[4867]: E0101 09:49:09.385401 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132d1c2d-1465-423c-b366-00bda22e79f0" containerName="extract-utilities" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.385413 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="132d1c2d-1465-423c-b366-00bda22e79f0" containerName="extract-utilities" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.385669 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bc32b87-684c-4313-a009-44dc00e8b713" containerName="registry-server" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.385698 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ba666d1e-4416-4028-b20f-6c1e135e3b5d" containerName="dnsmasq-dns" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.385722 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="132d1c2d-1465-423c-b366-00bda22e79f0" containerName="registry-server" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.386531 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-npcr2" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.389579 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.399965 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-npcr2"] Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.518709 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqm4w\" (UniqueName: \"kubernetes.io/projected/7e8f2331-28d3-4dad-b890-f7e734d17313-kube-api-access-rqm4w\") pod \"root-account-create-update-npcr2\" (UID: \"7e8f2331-28d3-4dad-b890-f7e734d17313\") " pod="openstack/root-account-create-update-npcr2" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.518985 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e8f2331-28d3-4dad-b890-f7e734d17313-operator-scripts\") pod \"root-account-create-update-npcr2\" (UID: \"7e8f2331-28d3-4dad-b890-f7e734d17313\") " pod="openstack/root-account-create-update-npcr2" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.620713 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e8f2331-28d3-4dad-b890-f7e734d17313-operator-scripts\") pod \"root-account-create-update-npcr2\" (UID: 
\"7e8f2331-28d3-4dad-b890-f7e734d17313\") " pod="openstack/root-account-create-update-npcr2" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.621040 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqm4w\" (UniqueName: \"kubernetes.io/projected/7e8f2331-28d3-4dad-b890-f7e734d17313-kube-api-access-rqm4w\") pod \"root-account-create-update-npcr2\" (UID: \"7e8f2331-28d3-4dad-b890-f7e734d17313\") " pod="openstack/root-account-create-update-npcr2" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.622077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e8f2331-28d3-4dad-b890-f7e734d17313-operator-scripts\") pod \"root-account-create-update-npcr2\" (UID: \"7e8f2331-28d3-4dad-b890-f7e734d17313\") " pod="openstack/root-account-create-update-npcr2" Jan 01 09:49:09 crc kubenswrapper[4867]: I0101 09:49:09.753079 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqm4w\" (UniqueName: \"kubernetes.io/projected/7e8f2331-28d3-4dad-b890-f7e734d17313-kube-api-access-rqm4w\") pod \"root-account-create-update-npcr2\" (UID: \"7e8f2331-28d3-4dad-b890-f7e734d17313\") " pod="openstack/root-account-create-update-npcr2" Jan 01 09:49:10 crc kubenswrapper[4867]: I0101 09:49:10.011491 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-npcr2" Jan 01 09:49:10 crc kubenswrapper[4867]: I0101 09:49:10.474624 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-npcr2"] Jan 01 09:49:10 crc kubenswrapper[4867]: W0101 09:49:10.483119 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e8f2331_28d3_4dad_b890_f7e734d17313.slice/crio-4ede0239076b8d3911ac611ab6ac3fe47b76e9e45f11483e71d1feec6b3f72b7 WatchSource:0}: Error finding container 4ede0239076b8d3911ac611ab6ac3fe47b76e9e45f11483e71d1feec6b3f72b7: Status 404 returned error can't find the container with id 4ede0239076b8d3911ac611ab6ac3fe47b76e9e45f11483e71d1feec6b3f72b7 Jan 01 09:49:11 crc kubenswrapper[4867]: I0101 09:49:11.113240 4867 generic.go:334] "Generic (PLEG): container finished" podID="7e8f2331-28d3-4dad-b890-f7e734d17313" containerID="d909d050723c9ed78f6c3878f968960374acc20536ec4244489d3077a0d48354" exitCode=0 Jan 01 09:49:11 crc kubenswrapper[4867]: I0101 09:49:11.113356 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-npcr2" event={"ID":"7e8f2331-28d3-4dad-b890-f7e734d17313","Type":"ContainerDied","Data":"d909d050723c9ed78f6c3878f968960374acc20536ec4244489d3077a0d48354"} Jan 01 09:49:11 crc kubenswrapper[4867]: I0101 09:49:11.113760 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-npcr2" event={"ID":"7e8f2331-28d3-4dad-b890-f7e734d17313","Type":"ContainerStarted","Data":"4ede0239076b8d3911ac611ab6ac3fe47b76e9e45f11483e71d1feec6b3f72b7"} Jan 01 09:49:12 crc kubenswrapper[4867]: I0101 09:49:12.563296 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-npcr2" Jan 01 09:49:12 crc kubenswrapper[4867]: I0101 09:49:12.677870 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqm4w\" (UniqueName: \"kubernetes.io/projected/7e8f2331-28d3-4dad-b890-f7e734d17313-kube-api-access-rqm4w\") pod \"7e8f2331-28d3-4dad-b890-f7e734d17313\" (UID: \"7e8f2331-28d3-4dad-b890-f7e734d17313\") " Jan 01 09:49:12 crc kubenswrapper[4867]: I0101 09:49:12.678089 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e8f2331-28d3-4dad-b890-f7e734d17313-operator-scripts\") pod \"7e8f2331-28d3-4dad-b890-f7e734d17313\" (UID: \"7e8f2331-28d3-4dad-b890-f7e734d17313\") " Jan 01 09:49:12 crc kubenswrapper[4867]: I0101 09:49:12.679497 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e8f2331-28d3-4dad-b890-f7e734d17313-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e8f2331-28d3-4dad-b890-f7e734d17313" (UID: "7e8f2331-28d3-4dad-b890-f7e734d17313"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:49:12 crc kubenswrapper[4867]: I0101 09:49:12.688473 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e8f2331-28d3-4dad-b890-f7e734d17313-kube-api-access-rqm4w" (OuterVolumeSpecName: "kube-api-access-rqm4w") pod "7e8f2331-28d3-4dad-b890-f7e734d17313" (UID: "7e8f2331-28d3-4dad-b890-f7e734d17313"). InnerVolumeSpecName "kube-api-access-rqm4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:49:12 crc kubenswrapper[4867]: I0101 09:49:12.780880 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqm4w\" (UniqueName: \"kubernetes.io/projected/7e8f2331-28d3-4dad-b890-f7e734d17313-kube-api-access-rqm4w\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:12 crc kubenswrapper[4867]: I0101 09:49:12.780964 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e8f2331-28d3-4dad-b890-f7e734d17313-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:13 crc kubenswrapper[4867]: I0101 09:49:13.142368 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-npcr2" Jan 01 09:49:13 crc kubenswrapper[4867]: I0101 09:49:13.152974 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-npcr2" event={"ID":"7e8f2331-28d3-4dad-b890-f7e734d17313","Type":"ContainerDied","Data":"4ede0239076b8d3911ac611ab6ac3fe47b76e9e45f11483e71d1feec6b3f72b7"} Jan 01 09:49:13 crc kubenswrapper[4867]: I0101 09:49:13.153038 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ede0239076b8d3911ac611ab6ac3fe47b76e9e45f11483e71d1feec6b3f72b7" Jan 01 09:49:15 crc kubenswrapper[4867]: I0101 09:49:15.819286 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-npcr2"] Jan 01 09:49:15 crc kubenswrapper[4867]: I0101 09:49:15.830494 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-npcr2"] Jan 01 09:49:17 crc kubenswrapper[4867]: I0101 09:49:17.146172 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e8f2331-28d3-4dad-b890-f7e734d17313" path="/var/lib/kubelet/pods/7e8f2331-28d3-4dad-b890-f7e734d17313/volumes" Jan 01 09:49:20 crc kubenswrapper[4867]: I0101 09:49:20.831177 4867 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vwjmr"] Jan 01 09:49:20 crc kubenswrapper[4867]: E0101 09:49:20.832140 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8f2331-28d3-4dad-b890-f7e734d17313" containerName="mariadb-account-create-update" Jan 01 09:49:20 crc kubenswrapper[4867]: I0101 09:49:20.832164 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8f2331-28d3-4dad-b890-f7e734d17313" containerName="mariadb-account-create-update" Jan 01 09:49:20 crc kubenswrapper[4867]: I0101 09:49:20.832482 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e8f2331-28d3-4dad-b890-f7e734d17313" containerName="mariadb-account-create-update" Jan 01 09:49:20 crc kubenswrapper[4867]: I0101 09:49:20.833320 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vwjmr" Jan 01 09:49:20 crc kubenswrapper[4867]: I0101 09:49:20.836487 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 01 09:49:20 crc kubenswrapper[4867]: I0101 09:49:20.850958 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vwjmr"] Jan 01 09:49:20 crc kubenswrapper[4867]: I0101 09:49:20.935974 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ddf0bf9-9447-4286-af8b-615436da38bf-operator-scripts\") pod \"root-account-create-update-vwjmr\" (UID: \"2ddf0bf9-9447-4286-af8b-615436da38bf\") " pod="openstack/root-account-create-update-vwjmr" Jan 01 09:49:20 crc kubenswrapper[4867]: I0101 09:49:20.936331 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76mf2\" (UniqueName: \"kubernetes.io/projected/2ddf0bf9-9447-4286-af8b-615436da38bf-kube-api-access-76mf2\") pod 
\"root-account-create-update-vwjmr\" (UID: \"2ddf0bf9-9447-4286-af8b-615436da38bf\") " pod="openstack/root-account-create-update-vwjmr" Jan 01 09:49:21 crc kubenswrapper[4867]: I0101 09:49:21.038741 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76mf2\" (UniqueName: \"kubernetes.io/projected/2ddf0bf9-9447-4286-af8b-615436da38bf-kube-api-access-76mf2\") pod \"root-account-create-update-vwjmr\" (UID: \"2ddf0bf9-9447-4286-af8b-615436da38bf\") " pod="openstack/root-account-create-update-vwjmr" Jan 01 09:49:21 crc kubenswrapper[4867]: I0101 09:49:21.039031 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ddf0bf9-9447-4286-af8b-615436da38bf-operator-scripts\") pod \"root-account-create-update-vwjmr\" (UID: \"2ddf0bf9-9447-4286-af8b-615436da38bf\") " pod="openstack/root-account-create-update-vwjmr" Jan 01 09:49:21 crc kubenswrapper[4867]: I0101 09:49:21.040385 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ddf0bf9-9447-4286-af8b-615436da38bf-operator-scripts\") pod \"root-account-create-update-vwjmr\" (UID: \"2ddf0bf9-9447-4286-af8b-615436da38bf\") " pod="openstack/root-account-create-update-vwjmr" Jan 01 09:49:21 crc kubenswrapper[4867]: I0101 09:49:21.073691 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76mf2\" (UniqueName: \"kubernetes.io/projected/2ddf0bf9-9447-4286-af8b-615436da38bf-kube-api-access-76mf2\") pod \"root-account-create-update-vwjmr\" (UID: \"2ddf0bf9-9447-4286-af8b-615436da38bf\") " pod="openstack/root-account-create-update-vwjmr" Jan 01 09:49:21 crc kubenswrapper[4867]: I0101 09:49:21.163388 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vwjmr" Jan 01 09:49:21 crc kubenswrapper[4867]: I0101 09:49:21.332524 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:49:21 crc kubenswrapper[4867]: I0101 09:49:21.332598 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:49:21 crc kubenswrapper[4867]: I0101 09:49:21.593464 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vwjmr"] Jan 01 09:49:21 crc kubenswrapper[4867]: W0101 09:49:21.607764 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ddf0bf9_9447_4286_af8b_615436da38bf.slice/crio-705ceabb1cd0a34470d10706d902c1c09112eba438cf30aa54ab806472b03070 WatchSource:0}: Error finding container 705ceabb1cd0a34470d10706d902c1c09112eba438cf30aa54ab806472b03070: Status 404 returned error can't find the container with id 705ceabb1cd0a34470d10706d902c1c09112eba438cf30aa54ab806472b03070 Jan 01 09:49:22 crc kubenswrapper[4867]: I0101 09:49:22.244506 4867 generic.go:334] "Generic (PLEG): container finished" podID="2ddf0bf9-9447-4286-af8b-615436da38bf" containerID="3df9c6b2e5f8e1ff7dc37965e8ea9f56fcea60ce0a7fdc4c996226505b0311e6" exitCode=0 Jan 01 09:49:22 crc kubenswrapper[4867]: I0101 09:49:22.244556 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vwjmr" 
event={"ID":"2ddf0bf9-9447-4286-af8b-615436da38bf","Type":"ContainerDied","Data":"3df9c6b2e5f8e1ff7dc37965e8ea9f56fcea60ce0a7fdc4c996226505b0311e6"} Jan 01 09:49:22 crc kubenswrapper[4867]: I0101 09:49:22.244586 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vwjmr" event={"ID":"2ddf0bf9-9447-4286-af8b-615436da38bf","Type":"ContainerStarted","Data":"705ceabb1cd0a34470d10706d902c1c09112eba438cf30aa54ab806472b03070"} Jan 01 09:49:23 crc kubenswrapper[4867]: I0101 09:49:23.681729 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vwjmr" Jan 01 09:49:23 crc kubenswrapper[4867]: I0101 09:49:23.808549 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76mf2\" (UniqueName: \"kubernetes.io/projected/2ddf0bf9-9447-4286-af8b-615436da38bf-kube-api-access-76mf2\") pod \"2ddf0bf9-9447-4286-af8b-615436da38bf\" (UID: \"2ddf0bf9-9447-4286-af8b-615436da38bf\") " Jan 01 09:49:23 crc kubenswrapper[4867]: I0101 09:49:23.808637 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ddf0bf9-9447-4286-af8b-615436da38bf-operator-scripts\") pod \"2ddf0bf9-9447-4286-af8b-615436da38bf\" (UID: \"2ddf0bf9-9447-4286-af8b-615436da38bf\") " Jan 01 09:49:23 crc kubenswrapper[4867]: I0101 09:49:23.809436 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ddf0bf9-9447-4286-af8b-615436da38bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ddf0bf9-9447-4286-af8b-615436da38bf" (UID: "2ddf0bf9-9447-4286-af8b-615436da38bf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:49:23 crc kubenswrapper[4867]: I0101 09:49:23.911713 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ddf0bf9-9447-4286-af8b-615436da38bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:24 crc kubenswrapper[4867]: I0101 09:49:24.266611 4867 generic.go:334] "Generic (PLEG): container finished" podID="78e20b53-c66f-44e9-8fb6-2280f8c50ac6" containerID="848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0" exitCode=0 Jan 01 09:49:24 crc kubenswrapper[4867]: I0101 09:49:24.266762 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78e20b53-c66f-44e9-8fb6-2280f8c50ac6","Type":"ContainerDied","Data":"848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0"} Jan 01 09:49:24 crc kubenswrapper[4867]: I0101 09:49:24.272141 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vwjmr" Jan 01 09:49:24 crc kubenswrapper[4867]: I0101 09:49:24.272882 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vwjmr" event={"ID":"2ddf0bf9-9447-4286-af8b-615436da38bf","Type":"ContainerDied","Data":"705ceabb1cd0a34470d10706d902c1c09112eba438cf30aa54ab806472b03070"} Jan 01 09:49:24 crc kubenswrapper[4867]: I0101 09:49:24.273035 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="705ceabb1cd0a34470d10706d902c1c09112eba438cf30aa54ab806472b03070" Jan 01 09:49:24 crc kubenswrapper[4867]: I0101 09:49:24.276131 4867 generic.go:334] "Generic (PLEG): container finished" podID="a603b88e-42cc-47f4-a96a-3644524346dc" containerID="d84b3f1af876135f0b78c47863e34090e2e81d7a0f4f0c10f945c6ff02707a86" exitCode=0 Jan 01 09:49:24 crc kubenswrapper[4867]: I0101 09:49:24.276182 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a603b88e-42cc-47f4-a96a-3644524346dc","Type":"ContainerDied","Data":"d84b3f1af876135f0b78c47863e34090e2e81d7a0f4f0c10f945c6ff02707a86"} Jan 01 09:49:24 crc kubenswrapper[4867]: I0101 09:49:24.351380 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ddf0bf9-9447-4286-af8b-615436da38bf-kube-api-access-76mf2" (OuterVolumeSpecName: "kube-api-access-76mf2") pod "2ddf0bf9-9447-4286-af8b-615436da38bf" (UID: "2ddf0bf9-9447-4286-af8b-615436da38bf"). InnerVolumeSpecName "kube-api-access-76mf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:49:24 crc kubenswrapper[4867]: I0101 09:49:24.430547 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76mf2\" (UniqueName: \"kubernetes.io/projected/2ddf0bf9-9447-4286-af8b-615436da38bf-kube-api-access-76mf2\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:25 crc kubenswrapper[4867]: I0101 09:49:25.286562 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a603b88e-42cc-47f4-a96a-3644524346dc","Type":"ContainerStarted","Data":"4f426708eb1ea9dcf5a08a217fb141a349008168dcaeb7bdd8ad2ac1bb8ceaa2"} Jan 01 09:49:25 crc kubenswrapper[4867]: I0101 09:49:25.287114 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:25 crc kubenswrapper[4867]: I0101 09:49:25.288953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78e20b53-c66f-44e9-8fb6-2280f8c50ac6","Type":"ContainerStarted","Data":"7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905"} Jan 01 09:49:25 crc kubenswrapper[4867]: I0101 09:49:25.289179 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 01 09:49:25 crc kubenswrapper[4867]: I0101 09:49:25.306266 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.306242599 podStartE2EDuration="37.306242599s" podCreationTimestamp="2026-01-01 09:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:49:25.305254251 +0000 UTC m=+4974.440523040" watchObservedRunningTime="2026-01-01 09:49:25.306242599 +0000 UTC m=+4974.441511378" Jan 01 09:49:25 crc kubenswrapper[4867]: I0101 09:49:25.330658 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-0" podStartSLOduration=38.330639205 podStartE2EDuration="38.330639205s" podCreationTimestamp="2026-01-01 09:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:49:25.325469677 +0000 UTC m=+4974.460738446" watchObservedRunningTime="2026-01-01 09:49:25.330639205 +0000 UTC m=+4974.465907964" Jan 01 09:49:39 crc kubenswrapper[4867]: I0101 09:49:39.348183 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 01 09:49:39 crc kubenswrapper[4867]: I0101 09:49:39.532028 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.775629 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-wnnzv"] Jan 01 09:49:43 crc kubenswrapper[4867]: E0101 09:49:43.776383 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ddf0bf9-9447-4286-af8b-615436da38bf" containerName="mariadb-account-create-update" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.776400 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ddf0bf9-9447-4286-af8b-615436da38bf" containerName="mariadb-account-create-update" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.776563 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ddf0bf9-9447-4286-af8b-615436da38bf" containerName="mariadb-account-create-update" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.777523 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.796630 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-wnnzv"] Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.849987 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs8bk\" (UniqueName: \"kubernetes.io/projected/cfe3ab9e-fd94-4a13-bd03-5716336019bf-kube-api-access-bs8bk\") pod \"dnsmasq-dns-699964fbc-wnnzv\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.850319 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-dns-svc\") pod \"dnsmasq-dns-699964fbc-wnnzv\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.850528 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-config\") pod \"dnsmasq-dns-699964fbc-wnnzv\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.952435 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs8bk\" (UniqueName: \"kubernetes.io/projected/cfe3ab9e-fd94-4a13-bd03-5716336019bf-kube-api-access-bs8bk\") pod \"dnsmasq-dns-699964fbc-wnnzv\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.952610 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-dns-svc\") pod \"dnsmasq-dns-699964fbc-wnnzv\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.952674 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-config\") pod \"dnsmasq-dns-699964fbc-wnnzv\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.953537 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-dns-svc\") pod \"dnsmasq-dns-699964fbc-wnnzv\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.954130 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-config\") pod \"dnsmasq-dns-699964fbc-wnnzv\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:43 crc kubenswrapper[4867]: I0101 09:49:43.982442 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs8bk\" (UniqueName: \"kubernetes.io/projected/cfe3ab9e-fd94-4a13-bd03-5716336019bf-kube-api-access-bs8bk\") pod \"dnsmasq-dns-699964fbc-wnnzv\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:44 crc kubenswrapper[4867]: I0101 09:49:44.095304 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:44 crc kubenswrapper[4867]: I0101 09:49:44.329565 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-wnnzv"] Jan 01 09:49:44 crc kubenswrapper[4867]: I0101 09:49:44.486793 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 09:49:44 crc kubenswrapper[4867]: I0101 09:49:44.521125 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" event={"ID":"cfe3ab9e-fd94-4a13-bd03-5716336019bf","Type":"ContainerStarted","Data":"b5637f1be7b40db6d99d4035f6cfe9cbbb4b70cfd568619f875fb1830b5db4a3"} Jan 01 09:49:45 crc kubenswrapper[4867]: I0101 09:49:45.236169 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 09:49:45 crc kubenswrapper[4867]: I0101 09:49:45.533197 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfe3ab9e-fd94-4a13-bd03-5716336019bf" containerID="62138944240db446e9a41ec6f854cd32f3a93569e40c327b3f33d330760472f5" exitCode=0 Jan 01 09:49:45 crc kubenswrapper[4867]: I0101 09:49:45.533265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" event={"ID":"cfe3ab9e-fd94-4a13-bd03-5716336019bf","Type":"ContainerDied","Data":"62138944240db446e9a41ec6f854cd32f3a93569e40c327b3f33d330760472f5"} Jan 01 09:49:46 crc kubenswrapper[4867]: I0101 09:49:46.544410 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" event={"ID":"cfe3ab9e-fd94-4a13-bd03-5716336019bf","Type":"ContainerStarted","Data":"0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c"} Jan 01 09:49:46 crc kubenswrapper[4867]: I0101 09:49:46.544739 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:46 crc kubenswrapper[4867]: I0101 09:49:46.567262 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" podStartSLOduration=3.567236948 podStartE2EDuration="3.567236948s" podCreationTimestamp="2026-01-01 09:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:49:46.563831792 +0000 UTC m=+4995.699100581" watchObservedRunningTime="2026-01-01 09:49:46.567236948 +0000 UTC m=+4995.702505737" Jan 01 09:49:48 crc kubenswrapper[4867]: I0101 09:49:48.717927 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="78e20b53-c66f-44e9-8fb6-2280f8c50ac6" containerName="rabbitmq" containerID="cri-o://7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905" gracePeriod=604796 Jan 01 09:49:49 crc kubenswrapper[4867]: I0101 09:49:49.038052 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a603b88e-42cc-47f4-a96a-3644524346dc" containerName="rabbitmq" containerID="cri-o://4f426708eb1ea9dcf5a08a217fb141a349008168dcaeb7bdd8ad2ac1bb8ceaa2" gracePeriod=604797 Jan 01 09:49:49 crc kubenswrapper[4867]: I0101 09:49:49.345653 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="78e20b53-c66f-44e9-8fb6-2280f8c50ac6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.247:5671: connect: connection refused" Jan 01 09:49:49 crc kubenswrapper[4867]: I0101 09:49:49.530558 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a603b88e-42cc-47f4-a96a-3644524346dc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.248:5671: connect: connection refused" Jan 01 09:49:51 crc kubenswrapper[4867]: I0101 09:49:51.332446 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:49:51 crc kubenswrapper[4867]: I0101 09:49:51.332746 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.096924 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.189098 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-f5274"] Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.189409 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" podUID="81bb7bf2-99db-499a-b2ff-c681fe62ee62" containerName="dnsmasq-dns" containerID="cri-o://21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586" gracePeriod=10 Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.599471 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.610262 4867 generic.go:334] "Generic (PLEG): container finished" podID="81bb7bf2-99db-499a-b2ff-c681fe62ee62" containerID="21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586" exitCode=0 Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.610317 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" event={"ID":"81bb7bf2-99db-499a-b2ff-c681fe62ee62","Type":"ContainerDied","Data":"21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586"} Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.610348 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" event={"ID":"81bb7bf2-99db-499a-b2ff-c681fe62ee62","Type":"ContainerDied","Data":"56936dd884090fe2913f73ff89436b4332c7d6bd42b0252efd80c864c9d06d5c"} Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.610369 4867 scope.go:117] "RemoveContainer" containerID="21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.610494 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-f5274" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.635931 4867 scope.go:117] "RemoveContainer" containerID="ff622dce526fde4c13b9a3056f7eaedf53a838cc7e97bdb68f2996dce13290cb" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.640462 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqm76\" (UniqueName: \"kubernetes.io/projected/81bb7bf2-99db-499a-b2ff-c681fe62ee62-kube-api-access-sqm76\") pod \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\" (UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.640643 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-dns-svc\") pod \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\" (UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.640692 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-config\") pod \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\" (UID: \"81bb7bf2-99db-499a-b2ff-c681fe62ee62\") " Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.647838 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81bb7bf2-99db-499a-b2ff-c681fe62ee62-kube-api-access-sqm76" (OuterVolumeSpecName: "kube-api-access-sqm76") pod "81bb7bf2-99db-499a-b2ff-c681fe62ee62" (UID: "81bb7bf2-99db-499a-b2ff-c681fe62ee62"). InnerVolumeSpecName "kube-api-access-sqm76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.654557 4867 scope.go:117] "RemoveContainer" containerID="21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586" Jan 01 09:49:54 crc kubenswrapper[4867]: E0101 09:49:54.655049 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586\": container with ID starting with 21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586 not found: ID does not exist" containerID="21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.655079 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586"} err="failed to get container status \"21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586\": rpc error: code = NotFound desc = could not find container \"21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586\": container with ID starting with 21c2cda6b4c913ee56d222ab092ee936939fe988b4b0e73a36eb95b2dd038586 not found: ID does not exist" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.655097 4867 scope.go:117] "RemoveContainer" containerID="ff622dce526fde4c13b9a3056f7eaedf53a838cc7e97bdb68f2996dce13290cb" Jan 01 09:49:54 crc kubenswrapper[4867]: E0101 09:49:54.655302 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff622dce526fde4c13b9a3056f7eaedf53a838cc7e97bdb68f2996dce13290cb\": container with ID starting with ff622dce526fde4c13b9a3056f7eaedf53a838cc7e97bdb68f2996dce13290cb not found: ID does not exist" containerID="ff622dce526fde4c13b9a3056f7eaedf53a838cc7e97bdb68f2996dce13290cb" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.655318 
4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff622dce526fde4c13b9a3056f7eaedf53a838cc7e97bdb68f2996dce13290cb"} err="failed to get container status \"ff622dce526fde4c13b9a3056f7eaedf53a838cc7e97bdb68f2996dce13290cb\": rpc error: code = NotFound desc = could not find container \"ff622dce526fde4c13b9a3056f7eaedf53a838cc7e97bdb68f2996dce13290cb\": container with ID starting with ff622dce526fde4c13b9a3056f7eaedf53a838cc7e97bdb68f2996dce13290cb not found: ID does not exist" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.684067 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-config" (OuterVolumeSpecName: "config") pod "81bb7bf2-99db-499a-b2ff-c681fe62ee62" (UID: "81bb7bf2-99db-499a-b2ff-c681fe62ee62"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.688621 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81bb7bf2-99db-499a-b2ff-c681fe62ee62" (UID: "81bb7bf2-99db-499a-b2ff-c681fe62ee62"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.742134 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.742169 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bb7bf2-99db-499a-b2ff-c681fe62ee62-config\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.742182 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqm76\" (UniqueName: \"kubernetes.io/projected/81bb7bf2-99db-499a-b2ff-c681fe62ee62-kube-api-access-sqm76\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.959804 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-f5274"] Jan 01 09:49:54 crc kubenswrapper[4867]: I0101 09:49:54.966805 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-f5274"] Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.144062 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81bb7bf2-99db-499a-b2ff-c681fe62ee62" path="/var/lib/kubelet/pods/81bb7bf2-99db-499a-b2ff-c681fe62ee62/volumes" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.457674 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.555328 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcnn5\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-kube-api-access-bcnn5\") pod \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.555376 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-erlang-cookie\") pod \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.555454 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-pod-info\") pod \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.555561 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\") pod \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.555618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-plugins-conf\") pod \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.555643 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-tls\") pod \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.555673 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-erlang-cookie-secret\") pod \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.555723 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-config-data\") pod \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.555785 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-confd\") pod \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.555801 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-plugins\") pod \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.555820 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-server-conf\") pod \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\" (UID: \"78e20b53-c66f-44e9-8fb6-2280f8c50ac6\") " Jan 01 09:49:55 
crc kubenswrapper[4867]: I0101 09:49:55.557009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "78e20b53-c66f-44e9-8fb6-2280f8c50ac6" (UID: "78e20b53-c66f-44e9-8fb6-2280f8c50ac6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.557249 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "78e20b53-c66f-44e9-8fb6-2280f8c50ac6" (UID: "78e20b53-c66f-44e9-8fb6-2280f8c50ac6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.563831 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-pod-info" (OuterVolumeSpecName: "pod-info") pod "78e20b53-c66f-44e9-8fb6-2280f8c50ac6" (UID: "78e20b53-c66f-44e9-8fb6-2280f8c50ac6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.564726 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "78e20b53-c66f-44e9-8fb6-2280f8c50ac6" (UID: "78e20b53-c66f-44e9-8fb6-2280f8c50ac6"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.565031 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "78e20b53-c66f-44e9-8fb6-2280f8c50ac6" (UID: "78e20b53-c66f-44e9-8fb6-2280f8c50ac6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.567599 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "78e20b53-c66f-44e9-8fb6-2280f8c50ac6" (UID: "78e20b53-c66f-44e9-8fb6-2280f8c50ac6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.585694 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-kube-api-access-bcnn5" (OuterVolumeSpecName: "kube-api-access-bcnn5") pod "78e20b53-c66f-44e9-8fb6-2280f8c50ac6" (UID: "78e20b53-c66f-44e9-8fb6-2280f8c50ac6"). InnerVolumeSpecName "kube-api-access-bcnn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.603188 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b" (OuterVolumeSpecName: "persistence") pod "78e20b53-c66f-44e9-8fb6-2280f8c50ac6" (UID: "78e20b53-c66f-44e9-8fb6-2280f8c50ac6"). InnerVolumeSpecName "pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.606801 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-server-conf" (OuterVolumeSpecName: "server-conf") pod "78e20b53-c66f-44e9-8fb6-2280f8c50ac6" (UID: "78e20b53-c66f-44e9-8fb6-2280f8c50ac6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.619197 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-config-data" (OuterVolumeSpecName: "config-data") pod "78e20b53-c66f-44e9-8fb6-2280f8c50ac6" (UID: "78e20b53-c66f-44e9-8fb6-2280f8c50ac6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.623659 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78e20b53-c66f-44e9-8fb6-2280f8c50ac6","Type":"ContainerDied","Data":"7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905"} Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.623673 4867 generic.go:334] "Generic (PLEG): container finished" podID="78e20b53-c66f-44e9-8fb6-2280f8c50ac6" containerID="7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905" exitCode=0 Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.623717 4867 scope.go:117] "RemoveContainer" containerID="7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.623799 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78e20b53-c66f-44e9-8fb6-2280f8c50ac6","Type":"ContainerDied","Data":"83d9f8b2de0ef17a0ad692faf7e23f7ad73bc04d5de6e185b96042641fd23a06"} Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 
09:49:55.623793 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.635413 4867 generic.go:334] "Generic (PLEG): container finished" podID="a603b88e-42cc-47f4-a96a-3644524346dc" containerID="4f426708eb1ea9dcf5a08a217fb141a349008168dcaeb7bdd8ad2ac1bb8ceaa2" exitCode=0 Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.635456 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a603b88e-42cc-47f4-a96a-3644524346dc","Type":"ContainerDied","Data":"4f426708eb1ea9dcf5a08a217fb141a349008168dcaeb7bdd8ad2ac1bb8ceaa2"} Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.637360 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.655811 4867 scope.go:117] "RemoveContainer" containerID="848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.657989 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-pod-info\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.658029 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\") on node \"crc\" " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.658043 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.658053 4867 reconciler_common.go:293] 
"Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.658062 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.658070 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.658078 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.658086 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-server-conf\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.658095 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcnn5\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-kube-api-access-bcnn5\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.658104 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.687127 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping UnmountDevice... Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.687662 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b") on node "crc" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.692075 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "78e20b53-c66f-44e9-8fb6-2280f8c50ac6" (UID: "78e20b53-c66f-44e9-8fb6-2280f8c50ac6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.692801 4867 scope.go:117] "RemoveContainer" containerID="7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905" Jan 01 09:49:55 crc kubenswrapper[4867]: E0101 09:49:55.693311 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905\": container with ID starting with 7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905 not found: ID does not exist" containerID="7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.693337 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905"} err="failed to get container status \"7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905\": rpc error: code = NotFound desc = could not find container \"7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905\": container with ID starting with 7561d99284755462eb282a2cc773a2807ab9567ead68521fbe37c4b4c6490905 not found: ID does not 
exist" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.693357 4867 scope.go:117] "RemoveContainer" containerID="848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0" Jan 01 09:49:55 crc kubenswrapper[4867]: E0101 09:49:55.697978 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0\": container with ID starting with 848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0 not found: ID does not exist" containerID="848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.698014 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0"} err="failed to get container status \"848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0\": rpc error: code = NotFound desc = could not find container \"848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0\": container with ID starting with 848c6eebae8f0e3deb1fad084edaf2fc41e19cd1f787ea58d7543d38aae862d0 not found: ID does not exist" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.758848 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-plugins-conf\") pod \"a603b88e-42cc-47f4-a96a-3644524346dc\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.758899 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-config-data\") pod \"a603b88e-42cc-47f4-a96a-3644524346dc\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 
09:49:55.758958 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a603b88e-42cc-47f4-a96a-3644524346dc-erlang-cookie-secret\") pod \"a603b88e-42cc-47f4-a96a-3644524346dc\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.759012 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p5qc\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-kube-api-access-7p5qc\") pod \"a603b88e-42cc-47f4-a96a-3644524346dc\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.759166 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\") pod \"a603b88e-42cc-47f4-a96a-3644524346dc\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.759202 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-tls\") pod \"a603b88e-42cc-47f4-a96a-3644524346dc\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.759230 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-confd\") pod \"a603b88e-42cc-47f4-a96a-3644524346dc\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.759288 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-erlang-cookie\") pod \"a603b88e-42cc-47f4-a96a-3644524346dc\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.759342 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-server-conf\") pod \"a603b88e-42cc-47f4-a96a-3644524346dc\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.759396 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a603b88e-42cc-47f4-a96a-3644524346dc-pod-info\") pod \"a603b88e-42cc-47f4-a96a-3644524346dc\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.759454 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-plugins\") pod \"a603b88e-42cc-47f4-a96a-3644524346dc\" (UID: \"a603b88e-42cc-47f4-a96a-3644524346dc\") " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.759803 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.759819 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78e20b53-c66f-44e9-8fb6-2280f8c50ac6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.760153 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a603b88e-42cc-47f4-a96a-3644524346dc" (UID: "a603b88e-42cc-47f4-a96a-3644524346dc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.760404 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a603b88e-42cc-47f4-a96a-3644524346dc" (UID: "a603b88e-42cc-47f4-a96a-3644524346dc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.761350 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a603b88e-42cc-47f4-a96a-3644524346dc" (UID: "a603b88e-42cc-47f4-a96a-3644524346dc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.763729 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-kube-api-access-7p5qc" (OuterVolumeSpecName: "kube-api-access-7p5qc") pod "a603b88e-42cc-47f4-a96a-3644524346dc" (UID: "a603b88e-42cc-47f4-a96a-3644524346dc"). InnerVolumeSpecName "kube-api-access-7p5qc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.764140 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a603b88e-42cc-47f4-a96a-3644524346dc" (UID: "a603b88e-42cc-47f4-a96a-3644524346dc"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.766521 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a603b88e-42cc-47f4-a96a-3644524346dc-pod-info" (OuterVolumeSpecName: "pod-info") pod "a603b88e-42cc-47f4-a96a-3644524346dc" (UID: "a603b88e-42cc-47f4-a96a-3644524346dc"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.768985 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a603b88e-42cc-47f4-a96a-3644524346dc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a603b88e-42cc-47f4-a96a-3644524346dc" (UID: "a603b88e-42cc-47f4-a96a-3644524346dc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.774445 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f" (OuterVolumeSpecName: "persistence") pod "a603b88e-42cc-47f4-a96a-3644524346dc" (UID: "a603b88e-42cc-47f4-a96a-3644524346dc"). InnerVolumeSpecName "pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.780133 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-config-data" (OuterVolumeSpecName: "config-data") pod "a603b88e-42cc-47f4-a96a-3644524346dc" (UID: "a603b88e-42cc-47f4-a96a-3644524346dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.801577 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-server-conf" (OuterVolumeSpecName: "server-conf") pod "a603b88e-42cc-47f4-a96a-3644524346dc" (UID: "a603b88e-42cc-47f4-a96a-3644524346dc"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.838748 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a603b88e-42cc-47f4-a96a-3644524346dc" (UID: "a603b88e-42cc-47f4-a96a-3644524346dc"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.862089 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.862124 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.862136 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a603b88e-42cc-47f4-a96a-3644524346dc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.862147 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p5qc\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-kube-api-access-7p5qc\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.862191 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\") on node \"crc\" " Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.862202 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.862214 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 01 
09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.862223 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.862232 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a603b88e-42cc-47f4-a96a-3644524346dc-server-conf\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.862240 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a603b88e-42cc-47f4-a96a-3644524346dc-pod-info\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.862250 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a603b88e-42cc-47f4-a96a-3644524346dc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.877693 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.877969 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f") on node "crc" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.963495 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\") on node \"crc\" DevicePath \"\"" Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.969296 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 09:49:55 crc kubenswrapper[4867]: I0101 09:49:55.979674 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.006281 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 09:49:56 crc kubenswrapper[4867]: E0101 09:49:56.006634 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81bb7bf2-99db-499a-b2ff-c681fe62ee62" containerName="init" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.006656 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="81bb7bf2-99db-499a-b2ff-c681fe62ee62" containerName="init" Jan 01 09:49:56 crc kubenswrapper[4867]: E0101 09:49:56.006869 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e20b53-c66f-44e9-8fb6-2280f8c50ac6" containerName="setup-container" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.006900 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e20b53-c66f-44e9-8fb6-2280f8c50ac6" containerName="setup-container" Jan 01 09:49:56 crc kubenswrapper[4867]: E0101 09:49:56.006932 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a603b88e-42cc-47f4-a96a-3644524346dc" containerName="setup-container" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.006940 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a603b88e-42cc-47f4-a96a-3644524346dc" containerName="setup-container" Jan 01 09:49:56 crc kubenswrapper[4867]: E0101 09:49:56.006954 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a603b88e-42cc-47f4-a96a-3644524346dc" containerName="rabbitmq" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.006962 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a603b88e-42cc-47f4-a96a-3644524346dc" containerName="rabbitmq" Jan 01 09:49:56 crc kubenswrapper[4867]: E0101 09:49:56.006976 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81bb7bf2-99db-499a-b2ff-c681fe62ee62" containerName="dnsmasq-dns" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.006984 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="81bb7bf2-99db-499a-b2ff-c681fe62ee62" containerName="dnsmasq-dns" Jan 01 09:49:56 crc kubenswrapper[4867]: E0101 09:49:56.006995 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e20b53-c66f-44e9-8fb6-2280f8c50ac6" containerName="rabbitmq" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.007002 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e20b53-c66f-44e9-8fb6-2280f8c50ac6" containerName="rabbitmq" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.007220 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e20b53-c66f-44e9-8fb6-2280f8c50ac6" containerName="rabbitmq" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.007238 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="81bb7bf2-99db-499a-b2ff-c681fe62ee62" containerName="dnsmasq-dns" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.007253 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a603b88e-42cc-47f4-a96a-3644524346dc" 
containerName="rabbitmq" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.008362 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.010436 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.012310 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5cv2r" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.012871 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.013145 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.013402 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.016743 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.017069 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.024638 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.067654 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.067720 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.067848 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.067908 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.067946 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.067985 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.068072 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ll4\" (UniqueName: \"kubernetes.io/projected/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-kube-api-access-z8ll4\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.068115 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.068186 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.068249 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.068982 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.169996 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.171143 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.171218 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.171257 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.171294 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.171364 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8ll4\" (UniqueName: 
\"kubernetes.io/projected/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-kube-api-access-z8ll4\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.171405 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.171499 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.171576 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.171615 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.171768 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.173877 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.174581 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.174821 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.175102 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.175579 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.175845 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.177324 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.177684 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.179153 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.179820 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.179855 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/398ce38eb77f06525022db021525dff8a29db5447dc71986068b172874e15a02/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.197599 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ll4\" (UniqueName: \"kubernetes.io/projected/eaf26a82-cbbb-41bd-89ed-9722ddd150cf-kube-api-access-z8ll4\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.216810 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1cdc226-d0b2-4896-8a47-cf3823b8ca7b\") pod \"rabbitmq-server-0\" (UID: \"eaf26a82-cbbb-41bd-89ed-9722ddd150cf\") " pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.328646 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.621874 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.649824 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eaf26a82-cbbb-41bd-89ed-9722ddd150cf","Type":"ContainerStarted","Data":"61542214185d8509e38f21b23d27a3dd1e7f9932781c8b29f4bcca29f9461f3b"} Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.651656 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a603b88e-42cc-47f4-a96a-3644524346dc","Type":"ContainerDied","Data":"d9f3ebf90061af7b45a03d11204bcdf26f2ab807b53bbbea5609913ad2ce3ce4"} Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.651689 4867 scope.go:117] "RemoveContainer" containerID="4f426708eb1ea9dcf5a08a217fb141a349008168dcaeb7bdd8ad2ac1bb8ceaa2" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.651765 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.747528 4867 scope.go:117] "RemoveContainer" containerID="d84b3f1af876135f0b78c47863e34090e2e81d7a0f4f0c10f945c6ff02707a86" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.786243 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.794121 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.813458 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.814530 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.817872 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-29z5g" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.817970 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.818017 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.818519 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.818574 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.819651 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.820141 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.845115 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.881299 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.881379 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.881424 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.881461 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4de45c4e-693b-4158-8b1c-0a50d54ae477-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.881499 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4de45c4e-693b-4158-8b1c-0a50d54ae477-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.881533 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4de45c4e-693b-4158-8b1c-0a50d54ae477-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.881555 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4de45c4e-693b-4158-8b1c-0a50d54ae477-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.881579 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.881623 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skj9v\" (UniqueName: \"kubernetes.io/projected/4de45c4e-693b-4158-8b1c-0a50d54ae477-kube-api-access-skj9v\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.881677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.881713 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4de45c4e-693b-4158-8b1c-0a50d54ae477-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.983362 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4de45c4e-693b-4158-8b1c-0a50d54ae477-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.983429 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4de45c4e-693b-4158-8b1c-0a50d54ae477-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.983463 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4de45c4e-693b-4158-8b1c-0a50d54ae477-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.983486 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4de45c4e-693b-4158-8b1c-0a50d54ae477-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.983508 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.983538 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skj9v\" (UniqueName: 
\"kubernetes.io/projected/4de45c4e-693b-4158-8b1c-0a50d54ae477-kube-api-access-skj9v\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.983591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.983624 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4de45c4e-693b-4158-8b1c-0a50d54ae477-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.983655 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.983701 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.983733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.985256 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4de45c4e-693b-4158-8b1c-0a50d54ae477-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.985301 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4de45c4e-693b-4158-8b1c-0a50d54ae477-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.985734 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.985760 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.986521 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4de45c4e-693b-4158-8b1c-0a50d54ae477-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.987529 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 01 09:49:56 crc kubenswrapper[4867]: I0101 09:49:56.987580 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dd0c54f362857cd9958c91636dfb491521abaa6cf57e38a339687a41ed7e5e9c/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:57 crc kubenswrapper[4867]: I0101 09:49:57.145948 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e20b53-c66f-44e9-8fb6-2280f8c50ac6" path="/var/lib/kubelet/pods/78e20b53-c66f-44e9-8fb6-2280f8c50ac6/volumes" Jan 01 09:49:57 crc kubenswrapper[4867]: I0101 09:49:57.147715 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a603b88e-42cc-47f4-a96a-3644524346dc" path="/var/lib/kubelet/pods/a603b88e-42cc-47f4-a96a-3644524346dc/volumes" Jan 01 09:49:57 crc kubenswrapper[4867]: I0101 09:49:57.353809 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4de45c4e-693b-4158-8b1c-0a50d54ae477-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:57 crc kubenswrapper[4867]: I0101 09:49:57.354008 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:57 crc kubenswrapper[4867]: I0101 09:49:57.354040 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4de45c4e-693b-4158-8b1c-0a50d54ae477-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:57 crc kubenswrapper[4867]: I0101 09:49:57.354537 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4de45c4e-693b-4158-8b1c-0a50d54ae477-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:57 crc kubenswrapper[4867]: I0101 09:49:57.355399 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skj9v\" (UniqueName: \"kubernetes.io/projected/4de45c4e-693b-4158-8b1c-0a50d54ae477-kube-api-access-skj9v\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:57 crc kubenswrapper[4867]: I0101 09:49:57.587089 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d8044e9-d567-4c1d-9023-1bedd67fe59f\") pod \"rabbitmq-cell1-server-0\" (UID: \"4de45c4e-693b-4158-8b1c-0a50d54ae477\") " pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:57 crc kubenswrapper[4867]: I0101 09:49:57.735032 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:49:58 crc kubenswrapper[4867]: I0101 09:49:58.416932 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 01 09:49:58 crc kubenswrapper[4867]: I0101 09:49:58.689865 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4de45c4e-693b-4158-8b1c-0a50d54ae477","Type":"ContainerStarted","Data":"75f5394211389a0b3bcb13a07232ea987d73446ce88a7485fc212f780990a219"} Jan 01 09:49:59 crc kubenswrapper[4867]: I0101 09:49:59.705072 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eaf26a82-cbbb-41bd-89ed-9722ddd150cf","Type":"ContainerStarted","Data":"aed4568fe1ebdf5713c0e9723bf7e63d586734e748178097408bbe0a980cbe0f"} Jan 01 09:50:01 crc kubenswrapper[4867]: I0101 09:50:01.728563 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4de45c4e-693b-4158-8b1c-0a50d54ae477","Type":"ContainerStarted","Data":"2eff1d34b560f7dacda7dcd604adf67e6a71d5f02e03d31507db62545609764c"} Jan 01 09:50:21 crc kubenswrapper[4867]: I0101 09:50:21.331626 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:50:21 crc kubenswrapper[4867]: I0101 09:50:21.332339 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:50:21 crc kubenswrapper[4867]: I0101 09:50:21.332403 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 09:50:21 crc kubenswrapper[4867]: I0101 09:50:21.333290 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 09:50:21 crc kubenswrapper[4867]: I0101 09:50:21.333386 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" gracePeriod=600 Jan 01 09:50:21 crc kubenswrapper[4867]: E0101 09:50:21.463184 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:50:21 crc kubenswrapper[4867]: I0101 09:50:21.910931 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" exitCode=0 Jan 01 09:50:21 crc kubenswrapper[4867]: I0101 09:50:21.910991 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" 
event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79"} Jan 01 09:50:21 crc kubenswrapper[4867]: I0101 09:50:21.911037 4867 scope.go:117] "RemoveContainer" containerID="7c3a0bf8d3dcccff6661f6d6d4e4e646f3451d0401617d8e0355386059505e8b" Jan 01 09:50:21 crc kubenswrapper[4867]: I0101 09:50:21.911784 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:50:21 crc kubenswrapper[4867]: E0101 09:50:21.912375 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:50:32 crc kubenswrapper[4867]: I0101 09:50:32.016323 4867 generic.go:334] "Generic (PLEG): container finished" podID="eaf26a82-cbbb-41bd-89ed-9722ddd150cf" containerID="aed4568fe1ebdf5713c0e9723bf7e63d586734e748178097408bbe0a980cbe0f" exitCode=0 Jan 01 09:50:32 crc kubenswrapper[4867]: I0101 09:50:32.016449 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eaf26a82-cbbb-41bd-89ed-9722ddd150cf","Type":"ContainerDied","Data":"aed4568fe1ebdf5713c0e9723bf7e63d586734e748178097408bbe0a980cbe0f"} Jan 01 09:50:33 crc kubenswrapper[4867]: I0101 09:50:33.028077 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eaf26a82-cbbb-41bd-89ed-9722ddd150cf","Type":"ContainerStarted","Data":"350b401df01763ff68e5afc3447abf4a27fd63aad2a0041221627bbaa117158b"} Jan 01 09:50:33 crc kubenswrapper[4867]: I0101 09:50:33.028564 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-server-0" Jan 01 09:50:33 crc kubenswrapper[4867]: I0101 09:50:33.062014 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.061992169 podStartE2EDuration="38.061992169s" podCreationTimestamp="2026-01-01 09:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:50:33.057394469 +0000 UTC m=+5042.192663328" watchObservedRunningTime="2026-01-01 09:50:33.061992169 +0000 UTC m=+5042.197260948" Jan 01 09:50:33 crc kubenswrapper[4867]: I0101 09:50:33.129587 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:50:33 crc kubenswrapper[4867]: E0101 09:50:33.130031 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:50:34 crc kubenswrapper[4867]: I0101 09:50:34.043579 4867 generic.go:334] "Generic (PLEG): container finished" podID="4de45c4e-693b-4158-8b1c-0a50d54ae477" containerID="2eff1d34b560f7dacda7dcd604adf67e6a71d5f02e03d31507db62545609764c" exitCode=0 Jan 01 09:50:34 crc kubenswrapper[4867]: I0101 09:50:34.043702 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4de45c4e-693b-4158-8b1c-0a50d54ae477","Type":"ContainerDied","Data":"2eff1d34b560f7dacda7dcd604adf67e6a71d5f02e03d31507db62545609764c"} Jan 01 09:50:35 crc kubenswrapper[4867]: I0101 09:50:35.056379 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"4de45c4e-693b-4158-8b1c-0a50d54ae477","Type":"ContainerStarted","Data":"200a3c90cb33549d82dafcea53aa7bf5c420cf4ff41855959732ae82553b15ba"} Jan 01 09:50:35 crc kubenswrapper[4867]: I0101 09:50:35.057014 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:50:35 crc kubenswrapper[4867]: I0101 09:50:35.091394 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.091366261 podStartE2EDuration="39.091366261s" podCreationTimestamp="2026-01-01 09:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:50:35.089825157 +0000 UTC m=+5044.225093936" watchObservedRunningTime="2026-01-01 09:50:35.091366261 +0000 UTC m=+5044.226635070" Jan 01 09:50:46 crc kubenswrapper[4867]: I0101 09:50:46.331254 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 01 09:50:47 crc kubenswrapper[4867]: I0101 09:50:47.738434 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 01 09:50:48 crc kubenswrapper[4867]: I0101 09:50:48.129343 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:50:48 crc kubenswrapper[4867]: E0101 09:50:48.129794 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:50:51 crc kubenswrapper[4867]: I0101 09:50:51.922600 4867 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Jan 01 09:50:51 crc kubenswrapper[4867]: I0101 09:50:51.924159 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Jan 01 09:50:51 crc kubenswrapper[4867]: I0101 09:50:51.931525 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-86bsg" Jan 01 09:50:51 crc kubenswrapper[4867]: I0101 09:50:51.939251 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Jan 01 09:50:52 crc kubenswrapper[4867]: I0101 09:50:52.038373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ptzn\" (UniqueName: \"kubernetes.io/projected/b83d65d3-340d-4a7e-98af-d09b55640eb1-kube-api-access-2ptzn\") pod \"mariadb-client-1-default\" (UID: \"b83d65d3-340d-4a7e-98af-d09b55640eb1\") " pod="openstack/mariadb-client-1-default" Jan 01 09:50:52 crc kubenswrapper[4867]: I0101 09:50:52.139827 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ptzn\" (UniqueName: \"kubernetes.io/projected/b83d65d3-340d-4a7e-98af-d09b55640eb1-kube-api-access-2ptzn\") pod \"mariadb-client-1-default\" (UID: \"b83d65d3-340d-4a7e-98af-d09b55640eb1\") " pod="openstack/mariadb-client-1-default" Jan 01 09:50:52 crc kubenswrapper[4867]: I0101 09:50:52.173083 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ptzn\" (UniqueName: \"kubernetes.io/projected/b83d65d3-340d-4a7e-98af-d09b55640eb1-kube-api-access-2ptzn\") pod \"mariadb-client-1-default\" (UID: \"b83d65d3-340d-4a7e-98af-d09b55640eb1\") " pod="openstack/mariadb-client-1-default" Jan 01 09:50:52 crc kubenswrapper[4867]: I0101 09:50:52.248153 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Jan 01 09:50:52 crc kubenswrapper[4867]: I0101 09:50:52.895082 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Jan 01 09:50:52 crc kubenswrapper[4867]: I0101 09:50:52.930403 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 09:50:53 crc kubenswrapper[4867]: I0101 09:50:53.223783 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"b83d65d3-340d-4a7e-98af-d09b55640eb1","Type":"ContainerStarted","Data":"38bd0e13151922b043acaf9299dd45aa6edba8bbcd05f758fa4748fea593feba"} Jan 01 09:50:54 crc kubenswrapper[4867]: I0101 09:50:54.232833 4867 generic.go:334] "Generic (PLEG): container finished" podID="b83d65d3-340d-4a7e-98af-d09b55640eb1" containerID="4947f14ef0a0cb9092112f6e82f0fff713fb1daa98d3700e718b2213657e04a6" exitCode=0 Jan 01 09:50:54 crc kubenswrapper[4867]: I0101 09:50:54.232878 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"b83d65d3-340d-4a7e-98af-d09b55640eb1","Type":"ContainerDied","Data":"4947f14ef0a0cb9092112f6e82f0fff713fb1daa98d3700e718b2213657e04a6"} Jan 01 09:50:55 crc kubenswrapper[4867]: I0101 09:50:55.736968 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Jan 01 09:50:55 crc kubenswrapper[4867]: I0101 09:50:55.778590 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_b83d65d3-340d-4a7e-98af-d09b55640eb1/mariadb-client-1-default/0.log" Jan 01 09:50:55 crc kubenswrapper[4867]: I0101 09:50:55.812208 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Jan 01 09:50:55 crc kubenswrapper[4867]: I0101 09:50:55.817517 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Jan 01 09:50:55 crc kubenswrapper[4867]: I0101 09:50:55.907621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ptzn\" (UniqueName: \"kubernetes.io/projected/b83d65d3-340d-4a7e-98af-d09b55640eb1-kube-api-access-2ptzn\") pod \"b83d65d3-340d-4a7e-98af-d09b55640eb1\" (UID: \"b83d65d3-340d-4a7e-98af-d09b55640eb1\") " Jan 01 09:50:55 crc kubenswrapper[4867]: I0101 09:50:55.912685 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b83d65d3-340d-4a7e-98af-d09b55640eb1-kube-api-access-2ptzn" (OuterVolumeSpecName: "kube-api-access-2ptzn") pod "b83d65d3-340d-4a7e-98af-d09b55640eb1" (UID: "b83d65d3-340d-4a7e-98af-d09b55640eb1"). InnerVolumeSpecName "kube-api-access-2ptzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.009645 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ptzn\" (UniqueName: \"kubernetes.io/projected/b83d65d3-340d-4a7e-98af-d09b55640eb1-kube-api-access-2ptzn\") on node \"crc\" DevicePath \"\"" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.250302 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38bd0e13151922b043acaf9299dd45aa6edba8bbcd05f758fa4748fea593feba" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.250674 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.344769 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Jan 01 09:50:56 crc kubenswrapper[4867]: E0101 09:50:56.345353 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83d65d3-340d-4a7e-98af-d09b55640eb1" containerName="mariadb-client-1-default" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.345393 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83d65d3-340d-4a7e-98af-d09b55640eb1" containerName="mariadb-client-1-default" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.345743 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b83d65d3-340d-4a7e-98af-d09b55640eb1" containerName="mariadb-client-1-default" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.346638 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.349942 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-86bsg" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.350091 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.518049 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9pwb\" (UniqueName: \"kubernetes.io/projected/d4d70412-f1d2-4a49-8018-59ece4465d90-kube-api-access-s9pwb\") pod \"mariadb-client-2-default\" (UID: \"d4d70412-f1d2-4a49-8018-59ece4465d90\") " pod="openstack/mariadb-client-2-default" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.619757 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9pwb\" (UniqueName: \"kubernetes.io/projected/d4d70412-f1d2-4a49-8018-59ece4465d90-kube-api-access-s9pwb\") pod \"mariadb-client-2-default\" (UID: \"d4d70412-f1d2-4a49-8018-59ece4465d90\") " pod="openstack/mariadb-client-2-default" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.643391 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9pwb\" (UniqueName: \"kubernetes.io/projected/d4d70412-f1d2-4a49-8018-59ece4465d90-kube-api-access-s9pwb\") pod \"mariadb-client-2-default\" (UID: \"d4d70412-f1d2-4a49-8018-59ece4465d90\") " pod="openstack/mariadb-client-2-default" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.677788 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Jan 01 09:50:56 crc kubenswrapper[4867]: I0101 09:50:56.989310 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Jan 01 09:50:57 crc kubenswrapper[4867]: W0101 09:50:57.007069 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4d70412_f1d2_4a49_8018_59ece4465d90.slice/crio-a6fb0a53463d5379a509aa8bc80811296c0e42f6945321c875597f49aa42fc89 WatchSource:0}: Error finding container a6fb0a53463d5379a509aa8bc80811296c0e42f6945321c875597f49aa42fc89: Status 404 returned error can't find the container with id a6fb0a53463d5379a509aa8bc80811296c0e42f6945321c875597f49aa42fc89 Jan 01 09:50:57 crc kubenswrapper[4867]: I0101 09:50:57.144915 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b83d65d3-340d-4a7e-98af-d09b55640eb1" path="/var/lib/kubelet/pods/b83d65d3-340d-4a7e-98af-d09b55640eb1/volumes" Jan 01 09:50:57 crc kubenswrapper[4867]: I0101 09:50:57.259718 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"d4d70412-f1d2-4a49-8018-59ece4465d90","Type":"ContainerStarted","Data":"032c93d1d4384cbbe4e53e201cbad1a2439b39618711a0b8338cd2beb7c63196"} Jan 01 09:50:57 crc kubenswrapper[4867]: I0101 09:50:57.259767 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"d4d70412-f1d2-4a49-8018-59ece4465d90","Type":"ContainerStarted","Data":"a6fb0a53463d5379a509aa8bc80811296c0e42f6945321c875597f49aa42fc89"} Jan 01 09:50:57 crc kubenswrapper[4867]: I0101 09:50:57.273422 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=1.273399945 podStartE2EDuration="1.273399945s" podCreationTimestamp="2026-01-01 09:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:50:57.271041278 +0000 UTC m=+5066.406310127" watchObservedRunningTime="2026-01-01 09:50:57.273399945 +0000 UTC m=+5066.408668724" Jan 01 09:50:58 crc kubenswrapper[4867]: I0101 09:50:58.271569 4867 generic.go:334] "Generic (PLEG): container finished" podID="d4d70412-f1d2-4a49-8018-59ece4465d90" containerID="032c93d1d4384cbbe4e53e201cbad1a2439b39618711a0b8338cd2beb7c63196" exitCode=1 Jan 01 09:50:58 crc kubenswrapper[4867]: I0101 09:50:58.271668 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"d4d70412-f1d2-4a49-8018-59ece4465d90","Type":"ContainerDied","Data":"032c93d1d4384cbbe4e53e201cbad1a2439b39618711a0b8338cd2beb7c63196"} Jan 01 09:50:59 crc kubenswrapper[4867]: I0101 09:50:59.129720 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:50:59 crc kubenswrapper[4867]: E0101 09:50:59.130181 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.266097 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.299523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"d4d70412-f1d2-4a49-8018-59ece4465d90","Type":"ContainerDied","Data":"a6fb0a53463d5379a509aa8bc80811296c0e42f6945321c875597f49aa42fc89"} Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.299578 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6fb0a53463d5379a509aa8bc80811296c0e42f6945321c875597f49aa42fc89" Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.299652 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.326056 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.333377 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.394245 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9pwb\" (UniqueName: \"kubernetes.io/projected/d4d70412-f1d2-4a49-8018-59ece4465d90-kube-api-access-s9pwb\") pod \"d4d70412-f1d2-4a49-8018-59ece4465d90\" (UID: \"d4d70412-f1d2-4a49-8018-59ece4465d90\") " Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.402733 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d70412-f1d2-4a49-8018-59ece4465d90-kube-api-access-s9pwb" (OuterVolumeSpecName: "kube-api-access-s9pwb") pod "d4d70412-f1d2-4a49-8018-59ece4465d90" (UID: "d4d70412-f1d2-4a49-8018-59ece4465d90"). InnerVolumeSpecName "kube-api-access-s9pwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.496024 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9pwb\" (UniqueName: \"kubernetes.io/projected/d4d70412-f1d2-4a49-8018-59ece4465d90-kube-api-access-s9pwb\") on node \"crc\" DevicePath \"\"" Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.925279 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Jan 01 09:51:00 crc kubenswrapper[4867]: E0101 09:51:00.925765 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d70412-f1d2-4a49-8018-59ece4465d90" containerName="mariadb-client-2-default" Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.925796 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d70412-f1d2-4a49-8018-59ece4465d90" containerName="mariadb-client-2-default" Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.926064 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d70412-f1d2-4a49-8018-59ece4465d90" containerName="mariadb-client-2-default" Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.926964 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.930558 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-86bsg" Jan 01 09:51:00 crc kubenswrapper[4867]: I0101 09:51:00.941861 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Jan 01 09:51:01 crc kubenswrapper[4867]: I0101 09:51:01.106769 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24dc8\" (UniqueName: \"kubernetes.io/projected/b2d3d762-35b4-490a-81e8-ff069fcddca5-kube-api-access-24dc8\") pod \"mariadb-client-1\" (UID: \"b2d3d762-35b4-490a-81e8-ff069fcddca5\") " pod="openstack/mariadb-client-1" Jan 01 09:51:01 crc kubenswrapper[4867]: I0101 09:51:01.145505 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d70412-f1d2-4a49-8018-59ece4465d90" path="/var/lib/kubelet/pods/d4d70412-f1d2-4a49-8018-59ece4465d90/volumes" Jan 01 09:51:01 crc kubenswrapper[4867]: I0101 09:51:01.208964 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24dc8\" (UniqueName: \"kubernetes.io/projected/b2d3d762-35b4-490a-81e8-ff069fcddca5-kube-api-access-24dc8\") pod \"mariadb-client-1\" (UID: \"b2d3d762-35b4-490a-81e8-ff069fcddca5\") " pod="openstack/mariadb-client-1" Jan 01 09:51:01 crc kubenswrapper[4867]: I0101 09:51:01.238833 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24dc8\" (UniqueName: \"kubernetes.io/projected/b2d3d762-35b4-490a-81e8-ff069fcddca5-kube-api-access-24dc8\") pod \"mariadb-client-1\" (UID: \"b2d3d762-35b4-490a-81e8-ff069fcddca5\") " pod="openstack/mariadb-client-1" Jan 01 09:51:01 crc kubenswrapper[4867]: I0101 09:51:01.258173 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Jan 01 09:51:01 crc kubenswrapper[4867]: I0101 09:51:01.834363 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Jan 01 09:51:01 crc kubenswrapper[4867]: W0101 09:51:01.844210 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2d3d762_35b4_490a_81e8_ff069fcddca5.slice/crio-cffc25e43d7138f0bd8de552250f7771f32757e0106d8d8bc6f4ac2a9b3e7cef WatchSource:0}: Error finding container cffc25e43d7138f0bd8de552250f7771f32757e0106d8d8bc6f4ac2a9b3e7cef: Status 404 returned error can't find the container with id cffc25e43d7138f0bd8de552250f7771f32757e0106d8d8bc6f4ac2a9b3e7cef Jan 01 09:51:02 crc kubenswrapper[4867]: I0101 09:51:02.322123 4867 generic.go:334] "Generic (PLEG): container finished" podID="b2d3d762-35b4-490a-81e8-ff069fcddca5" containerID="fdb407e475d05c2d4596ab2464c48035358f4a7fb329d521383a4bbd4c0fb30f" exitCode=0 Jan 01 09:51:02 crc kubenswrapper[4867]: I0101 09:51:02.322187 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"b2d3d762-35b4-490a-81e8-ff069fcddca5","Type":"ContainerDied","Data":"fdb407e475d05c2d4596ab2464c48035358f4a7fb329d521383a4bbd4c0fb30f"} Jan 01 09:51:02 crc kubenswrapper[4867]: I0101 09:51:02.322227 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"b2d3d762-35b4-490a-81e8-ff069fcddca5","Type":"ContainerStarted","Data":"cffc25e43d7138f0bd8de552250f7771f32757e0106d8d8bc6f4ac2a9b3e7cef"} Jan 01 09:51:03 crc kubenswrapper[4867]: I0101 09:51:03.760716 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Jan 01 09:51:03 crc kubenswrapper[4867]: I0101 09:51:03.779797 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_b2d3d762-35b4-490a-81e8-ff069fcddca5/mariadb-client-1/0.log" Jan 01 09:51:03 crc kubenswrapper[4867]: I0101 09:51:03.810361 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Jan 01 09:51:03 crc kubenswrapper[4867]: I0101 09:51:03.815357 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Jan 01 09:51:03 crc kubenswrapper[4867]: I0101 09:51:03.952490 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24dc8\" (UniqueName: \"kubernetes.io/projected/b2d3d762-35b4-490a-81e8-ff069fcddca5-kube-api-access-24dc8\") pod \"b2d3d762-35b4-490a-81e8-ff069fcddca5\" (UID: \"b2d3d762-35b4-490a-81e8-ff069fcddca5\") " Jan 01 09:51:03 crc kubenswrapper[4867]: I0101 09:51:03.961225 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d3d762-35b4-490a-81e8-ff069fcddca5-kube-api-access-24dc8" (OuterVolumeSpecName: "kube-api-access-24dc8") pod "b2d3d762-35b4-490a-81e8-ff069fcddca5" (UID: "b2d3d762-35b4-490a-81e8-ff069fcddca5"). InnerVolumeSpecName "kube-api-access-24dc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.055484 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24dc8\" (UniqueName: \"kubernetes.io/projected/b2d3d762-35b4-490a-81e8-ff069fcddca5-kube-api-access-24dc8\") on node \"crc\" DevicePath \"\"" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.343712 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cffc25e43d7138f0bd8de552250f7771f32757e0106d8d8bc6f4ac2a9b3e7cef" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.343801 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.395740 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Jan 01 09:51:04 crc kubenswrapper[4867]: E0101 09:51:04.396180 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d3d762-35b4-490a-81e8-ff069fcddca5" containerName="mariadb-client-1" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.396212 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d3d762-35b4-490a-81e8-ff069fcddca5" containerName="mariadb-client-1" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.396480 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d3d762-35b4-490a-81e8-ff069fcddca5" containerName="mariadb-client-1" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.397297 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.402777 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-86bsg" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.419446 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.564285 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgk8b\" (UniqueName: \"kubernetes.io/projected/f16ae6bf-1115-4d81-8db3-4b04395f3542-kube-api-access-tgk8b\") pod \"mariadb-client-4-default\" (UID: \"f16ae6bf-1115-4d81-8db3-4b04395f3542\") " pod="openstack/mariadb-client-4-default" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.666286 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgk8b\" (UniqueName: \"kubernetes.io/projected/f16ae6bf-1115-4d81-8db3-4b04395f3542-kube-api-access-tgk8b\") pod \"mariadb-client-4-default\" (UID: \"f16ae6bf-1115-4d81-8db3-4b04395f3542\") " pod="openstack/mariadb-client-4-default" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.697577 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgk8b\" (UniqueName: \"kubernetes.io/projected/f16ae6bf-1115-4d81-8db3-4b04395f3542-kube-api-access-tgk8b\") pod \"mariadb-client-4-default\" (UID: \"f16ae6bf-1115-4d81-8db3-4b04395f3542\") " pod="openstack/mariadb-client-4-default" Jan 01 09:51:04 crc kubenswrapper[4867]: I0101 09:51:04.737744 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Jan 01 09:51:05 crc kubenswrapper[4867]: I0101 09:51:05.114146 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Jan 01 09:51:05 crc kubenswrapper[4867]: I0101 09:51:05.138458 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d3d762-35b4-490a-81e8-ff069fcddca5" path="/var/lib/kubelet/pods/b2d3d762-35b4-490a-81e8-ff069fcddca5/volumes" Jan 01 09:51:05 crc kubenswrapper[4867]: I0101 09:51:05.355277 4867 generic.go:334] "Generic (PLEG): container finished" podID="f16ae6bf-1115-4d81-8db3-4b04395f3542" containerID="fc7e16dd4def607514202bb26456d0af6110f4aa428293d254531010c64aa8fa" exitCode=0 Jan 01 09:51:05 crc kubenswrapper[4867]: I0101 09:51:05.355318 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"f16ae6bf-1115-4d81-8db3-4b04395f3542","Type":"ContainerDied","Data":"fc7e16dd4def607514202bb26456d0af6110f4aa428293d254531010c64aa8fa"} Jan 01 09:51:05 crc kubenswrapper[4867]: I0101 09:51:05.355344 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"f16ae6bf-1115-4d81-8db3-4b04395f3542","Type":"ContainerStarted","Data":"b0ff6475c59319bccc3d6c13d72895e17d0e1d126c67cc45c42d2b03be975006"} Jan 01 09:51:06 crc kubenswrapper[4867]: I0101 09:51:06.821243 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Jan 01 09:51:06 crc kubenswrapper[4867]: I0101 09:51:06.849517 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_f16ae6bf-1115-4d81-8db3-4b04395f3542/mariadb-client-4-default/0.log" Jan 01 09:51:06 crc kubenswrapper[4867]: I0101 09:51:06.879008 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Jan 01 09:51:06 crc kubenswrapper[4867]: I0101 09:51:06.886488 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Jan 01 09:51:07 crc kubenswrapper[4867]: I0101 09:51:07.004442 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgk8b\" (UniqueName: \"kubernetes.io/projected/f16ae6bf-1115-4d81-8db3-4b04395f3542-kube-api-access-tgk8b\") pod \"f16ae6bf-1115-4d81-8db3-4b04395f3542\" (UID: \"f16ae6bf-1115-4d81-8db3-4b04395f3542\") " Jan 01 09:51:07 crc kubenswrapper[4867]: I0101 09:51:07.024197 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f16ae6bf-1115-4d81-8db3-4b04395f3542-kube-api-access-tgk8b" (OuterVolumeSpecName: "kube-api-access-tgk8b") pod "f16ae6bf-1115-4d81-8db3-4b04395f3542" (UID: "f16ae6bf-1115-4d81-8db3-4b04395f3542"). InnerVolumeSpecName "kube-api-access-tgk8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:51:07 crc kubenswrapper[4867]: I0101 09:51:07.107499 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgk8b\" (UniqueName: \"kubernetes.io/projected/f16ae6bf-1115-4d81-8db3-4b04395f3542-kube-api-access-tgk8b\") on node \"crc\" DevicePath \"\"" Jan 01 09:51:07 crc kubenswrapper[4867]: I0101 09:51:07.145988 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f16ae6bf-1115-4d81-8db3-4b04395f3542" path="/var/lib/kubelet/pods/f16ae6bf-1115-4d81-8db3-4b04395f3542/volumes" Jan 01 09:51:07 crc kubenswrapper[4867]: I0101 09:51:07.375969 4867 scope.go:117] "RemoveContainer" containerID="fc7e16dd4def607514202bb26456d0af6110f4aa428293d254531010c64aa8fa" Jan 01 09:51:07 crc kubenswrapper[4867]: I0101 09:51:07.376103 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Jan 01 09:51:10 crc kubenswrapper[4867]: I0101 09:51:10.129168 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:51:10 crc kubenswrapper[4867]: E0101 09:51:10.130176 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:51:10 crc kubenswrapper[4867]: I0101 09:51:10.557371 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Jan 01 09:51:10 crc kubenswrapper[4867]: E0101 09:51:10.562582 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f16ae6bf-1115-4d81-8db3-4b04395f3542" containerName="mariadb-client-4-default" Jan 
01 09:51:10 crc kubenswrapper[4867]: I0101 09:51:10.562637 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16ae6bf-1115-4d81-8db3-4b04395f3542" containerName="mariadb-client-4-default" Jan 01 09:51:10 crc kubenswrapper[4867]: I0101 09:51:10.562973 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f16ae6bf-1115-4d81-8db3-4b04395f3542" containerName="mariadb-client-4-default" Jan 01 09:51:10 crc kubenswrapper[4867]: I0101 09:51:10.563821 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Jan 01 09:51:10 crc kubenswrapper[4867]: I0101 09:51:10.567142 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-86bsg" Jan 01 09:51:10 crc kubenswrapper[4867]: I0101 09:51:10.574509 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Jan 01 09:51:10 crc kubenswrapper[4867]: I0101 09:51:10.670401 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snw2h\" (UniqueName: \"kubernetes.io/projected/e24ac851-3780-4cfc-8903-d3585e5e789e-kube-api-access-snw2h\") pod \"mariadb-client-5-default\" (UID: \"e24ac851-3780-4cfc-8903-d3585e5e789e\") " pod="openstack/mariadb-client-5-default" Jan 01 09:51:10 crc kubenswrapper[4867]: I0101 09:51:10.772265 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snw2h\" (UniqueName: \"kubernetes.io/projected/e24ac851-3780-4cfc-8903-d3585e5e789e-kube-api-access-snw2h\") pod \"mariadb-client-5-default\" (UID: \"e24ac851-3780-4cfc-8903-d3585e5e789e\") " pod="openstack/mariadb-client-5-default" Jan 01 09:51:10 crc kubenswrapper[4867]: I0101 09:51:10.804177 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snw2h\" (UniqueName: \"kubernetes.io/projected/e24ac851-3780-4cfc-8903-d3585e5e789e-kube-api-access-snw2h\") pod 
\"mariadb-client-5-default\" (UID: \"e24ac851-3780-4cfc-8903-d3585e5e789e\") " pod="openstack/mariadb-client-5-default" Jan 01 09:51:10 crc kubenswrapper[4867]: I0101 09:51:10.897388 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Jan 01 09:51:11 crc kubenswrapper[4867]: I0101 09:51:11.231341 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Jan 01 09:51:11 crc kubenswrapper[4867]: I0101 09:51:11.430024 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"e24ac851-3780-4cfc-8903-d3585e5e789e","Type":"ContainerStarted","Data":"9164dc275131bfab97eb061c268bea17301a1db2cd92624742201660a96712b1"} Jan 01 09:51:11 crc kubenswrapper[4867]: I0101 09:51:11.430425 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"e24ac851-3780-4cfc-8903-d3585e5e789e","Type":"ContainerStarted","Data":"60ea6d6c272108b70b9d023fca20e7cc521f421ab53ab4ce2622f17faa6338e2"} Jan 01 09:51:11 crc kubenswrapper[4867]: I0101 09:51:11.449801 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-5-default" podStartSLOduration=1.449785571 podStartE2EDuration="1.449785571s" podCreationTimestamp="2026-01-01 09:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:51:11.446629481 +0000 UTC m=+5080.581898330" watchObservedRunningTime="2026-01-01 09:51:11.449785571 +0000 UTC m=+5080.585054330" Jan 01 09:51:11 crc kubenswrapper[4867]: I0101 09:51:11.504616 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_e24ac851-3780-4cfc-8903-d3585e5e789e/mariadb-client-5-default/0.log" Jan 01 09:51:12 crc kubenswrapper[4867]: I0101 09:51:12.442409 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="e24ac851-3780-4cfc-8903-d3585e5e789e" containerID="9164dc275131bfab97eb061c268bea17301a1db2cd92624742201660a96712b1" exitCode=0 Jan 01 09:51:12 crc kubenswrapper[4867]: I0101 09:51:12.442468 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"e24ac851-3780-4cfc-8903-d3585e5e789e","Type":"ContainerDied","Data":"9164dc275131bfab97eb061c268bea17301a1db2cd92624742201660a96712b1"} Jan 01 09:51:13 crc kubenswrapper[4867]: I0101 09:51:13.884525 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Jan 01 09:51:13 crc kubenswrapper[4867]: I0101 09:51:13.920099 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Jan 01 09:51:13 crc kubenswrapper[4867]: I0101 09:51:13.927009 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.029449 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snw2h\" (UniqueName: \"kubernetes.io/projected/e24ac851-3780-4cfc-8903-d3585e5e789e-kube-api-access-snw2h\") pod \"e24ac851-3780-4cfc-8903-d3585e5e789e\" (UID: \"e24ac851-3780-4cfc-8903-d3585e5e789e\") " Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.038323 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24ac851-3780-4cfc-8903-d3585e5e789e-kube-api-access-snw2h" (OuterVolumeSpecName: "kube-api-access-snw2h") pod "e24ac851-3780-4cfc-8903-d3585e5e789e" (UID: "e24ac851-3780-4cfc-8903-d3585e5e789e"). InnerVolumeSpecName "kube-api-access-snw2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.109961 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Jan 01 09:51:14 crc kubenswrapper[4867]: E0101 09:51:14.110375 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24ac851-3780-4cfc-8903-d3585e5e789e" containerName="mariadb-client-5-default" Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.110404 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24ac851-3780-4cfc-8903-d3585e5e789e" containerName="mariadb-client-5-default" Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.110587 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24ac851-3780-4cfc-8903-d3585e5e789e" containerName="mariadb-client-5-default" Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.111153 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.126127 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.131189 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snw2h\" (UniqueName: \"kubernetes.io/projected/e24ac851-3780-4cfc-8903-d3585e5e789e-kube-api-access-snw2h\") on node \"crc\" DevicePath \"\"" Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.233193 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwmk7\" (UniqueName: \"kubernetes.io/projected/6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7-kube-api-access-qwmk7\") pod \"mariadb-client-6-default\" (UID: \"6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7\") " pod="openstack/mariadb-client-6-default" Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.335451 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qwmk7\" (UniqueName: \"kubernetes.io/projected/6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7-kube-api-access-qwmk7\") pod \"mariadb-client-6-default\" (UID: \"6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7\") " pod="openstack/mariadb-client-6-default" Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.465269 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60ea6d6c272108b70b9d023fca20e7cc521f421ab53ab4ce2622f17faa6338e2" Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.465385 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Jan 01 09:51:14 crc kubenswrapper[4867]: I0101 09:51:14.952701 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwmk7\" (UniqueName: \"kubernetes.io/projected/6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7-kube-api-access-qwmk7\") pod \"mariadb-client-6-default\" (UID: \"6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7\") " pod="openstack/mariadb-client-6-default" Jan 01 09:51:15 crc kubenswrapper[4867]: I0101 09:51:15.045973 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Jan 01 09:51:15 crc kubenswrapper[4867]: I0101 09:51:15.142083 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e24ac851-3780-4cfc-8903-d3585e5e789e" path="/var/lib/kubelet/pods/e24ac851-3780-4cfc-8903-d3585e5e789e/volumes" Jan 01 09:51:15 crc kubenswrapper[4867]: I0101 09:51:15.468745 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Jan 01 09:51:15 crc kubenswrapper[4867]: W0101 09:51:15.474571 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b6b1cf5_8e2a_4ace_8218_0a03ec6e00e7.slice/crio-b22dc22b1d29ba75c651928e6915a0a579aba1f9cc57de9bd867036791d6dae0 WatchSource:0}: Error finding container b22dc22b1d29ba75c651928e6915a0a579aba1f9cc57de9bd867036791d6dae0: Status 404 returned error can't find the container with id b22dc22b1d29ba75c651928e6915a0a579aba1f9cc57de9bd867036791d6dae0 Jan 01 09:51:16 crc kubenswrapper[4867]: I0101 09:51:16.500569 4867 generic.go:334] "Generic (PLEG): container finished" podID="6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7" containerID="55fa3562c18f66503d11a25ab40feeb6f9b7d1d30835bebde7245c2edd5a77fb" exitCode=1 Jan 01 09:51:16 crc kubenswrapper[4867]: I0101 09:51:16.500647 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7","Type":"ContainerDied","Data":"55fa3562c18f66503d11a25ab40feeb6f9b7d1d30835bebde7245c2edd5a77fb"} Jan 01 09:51:16 crc kubenswrapper[4867]: I0101 09:51:16.501039 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7","Type":"ContainerStarted","Data":"b22dc22b1d29ba75c651928e6915a0a579aba1f9cc57de9bd867036791d6dae0"} Jan 01 09:51:17 crc kubenswrapper[4867]: I0101 09:51:17.996118 4867 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.016579 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-6-default_6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7/mariadb-client-6-default/0.log" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.062228 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.072827 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.111206 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwmk7\" (UniqueName: \"kubernetes.io/projected/6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7-kube-api-access-qwmk7\") pod \"6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7\" (UID: \"6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7\") " Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.119345 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7-kube-api-access-qwmk7" (OuterVolumeSpecName: "kube-api-access-qwmk7") pod "6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7" (UID: "6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7"). InnerVolumeSpecName "kube-api-access-qwmk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.212841 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwmk7\" (UniqueName: \"kubernetes.io/projected/6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7-kube-api-access-qwmk7\") on node \"crc\" DevicePath \"\"" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.233308 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Jan 01 09:51:18 crc kubenswrapper[4867]: E0101 09:51:18.233707 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7" containerName="mariadb-client-6-default" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.233735 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7" containerName="mariadb-client-6-default" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.233926 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7" containerName="mariadb-client-6-default" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.234477 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.241485 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.314531 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wr92\" (UniqueName: \"kubernetes.io/projected/e20ee0e8-d836-43fb-aed1-e6efcbf7dabc-kube-api-access-4wr92\") pod \"mariadb-client-7-default\" (UID: \"e20ee0e8-d836-43fb-aed1-e6efcbf7dabc\") " pod="openstack/mariadb-client-7-default" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.416708 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wr92\" (UniqueName: \"kubernetes.io/projected/e20ee0e8-d836-43fb-aed1-e6efcbf7dabc-kube-api-access-4wr92\") pod \"mariadb-client-7-default\" (UID: \"e20ee0e8-d836-43fb-aed1-e6efcbf7dabc\") " pod="openstack/mariadb-client-7-default" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.451492 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wr92\" (UniqueName: \"kubernetes.io/projected/e20ee0e8-d836-43fb-aed1-e6efcbf7dabc-kube-api-access-4wr92\") pod \"mariadb-client-7-default\" (UID: \"e20ee0e8-d836-43fb-aed1-e6efcbf7dabc\") " pod="openstack/mariadb-client-7-default" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.520958 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b22dc22b1d29ba75c651928e6915a0a579aba1f9cc57de9bd867036791d6dae0" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.521014 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.562132 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Jan 01 09:51:18 crc kubenswrapper[4867]: I0101 09:51:18.961122 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Jan 01 09:51:18 crc kubenswrapper[4867]: W0101 09:51:18.961937 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode20ee0e8_d836_43fb_aed1_e6efcbf7dabc.slice/crio-2b675c9f6e02765c799a8db8d63d3a5063eb28b074c3ee491e4adb3f3b0f6a53 WatchSource:0}: Error finding container 2b675c9f6e02765c799a8db8d63d3a5063eb28b074c3ee491e4adb3f3b0f6a53: Status 404 returned error can't find the container with id 2b675c9f6e02765c799a8db8d63d3a5063eb28b074c3ee491e4adb3f3b0f6a53 Jan 01 09:51:19 crc kubenswrapper[4867]: I0101 09:51:19.139324 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7" path="/var/lib/kubelet/pods/6b6b1cf5-8e2a-4ace-8218-0a03ec6e00e7/volumes" Jan 01 09:51:19 crc kubenswrapper[4867]: I0101 09:51:19.534005 4867 generic.go:334] "Generic (PLEG): container finished" podID="e20ee0e8-d836-43fb-aed1-e6efcbf7dabc" containerID="c5e2b8d777ef6c17c69dc6f3e4ed7c80c2174c252c9c14228c6b51284ab48751" exitCode=0 Jan 01 09:51:19 crc kubenswrapper[4867]: I0101 09:51:19.534071 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"e20ee0e8-d836-43fb-aed1-e6efcbf7dabc","Type":"ContainerDied","Data":"c5e2b8d777ef6c17c69dc6f3e4ed7c80c2174c252c9c14228c6b51284ab48751"} Jan 01 09:51:19 crc kubenswrapper[4867]: I0101 09:51:19.534117 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"e20ee0e8-d836-43fb-aed1-e6efcbf7dabc","Type":"ContainerStarted","Data":"2b675c9f6e02765c799a8db8d63d3a5063eb28b074c3ee491e4adb3f3b0f6a53"} Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.010166 4867 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.034992 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_e20ee0e8-d836-43fb-aed1-e6efcbf7dabc/mariadb-client-7-default/0.log" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.062665 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wr92\" (UniqueName: \"kubernetes.io/projected/e20ee0e8-d836-43fb-aed1-e6efcbf7dabc-kube-api-access-4wr92\") pod \"e20ee0e8-d836-43fb-aed1-e6efcbf7dabc\" (UID: \"e20ee0e8-d836-43fb-aed1-e6efcbf7dabc\") " Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.063182 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.074174 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.081290 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e20ee0e8-d836-43fb-aed1-e6efcbf7dabc-kube-api-access-4wr92" (OuterVolumeSpecName: "kube-api-access-4wr92") pod "e20ee0e8-d836-43fb-aed1-e6efcbf7dabc" (UID: "e20ee0e8-d836-43fb-aed1-e6efcbf7dabc"). InnerVolumeSpecName "kube-api-access-4wr92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.145771 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e20ee0e8-d836-43fb-aed1-e6efcbf7dabc" path="/var/lib/kubelet/pods/e20ee0e8-d836-43fb-aed1-e6efcbf7dabc/volumes" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.164722 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wr92\" (UniqueName: \"kubernetes.io/projected/e20ee0e8-d836-43fb-aed1-e6efcbf7dabc-kube-api-access-4wr92\") on node \"crc\" DevicePath \"\"" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.213690 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Jan 01 09:51:21 crc kubenswrapper[4867]: E0101 09:51:21.214109 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e20ee0e8-d836-43fb-aed1-e6efcbf7dabc" containerName="mariadb-client-7-default" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.214127 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e20ee0e8-d836-43fb-aed1-e6efcbf7dabc" containerName="mariadb-client-7-default" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.214298 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e20ee0e8-d836-43fb-aed1-e6efcbf7dabc" containerName="mariadb-client-7-default" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.214778 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.220011 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.265630 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpw2m\" (UniqueName: \"kubernetes.io/projected/ec0d3619-ae22-4683-87d0-8a91144a9bdb-kube-api-access-rpw2m\") pod \"mariadb-client-2\" (UID: \"ec0d3619-ae22-4683-87d0-8a91144a9bdb\") " pod="openstack/mariadb-client-2" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.369100 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpw2m\" (UniqueName: \"kubernetes.io/projected/ec0d3619-ae22-4683-87d0-8a91144a9bdb-kube-api-access-rpw2m\") pod \"mariadb-client-2\" (UID: \"ec0d3619-ae22-4683-87d0-8a91144a9bdb\") " pod="openstack/mariadb-client-2" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.401074 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpw2m\" (UniqueName: \"kubernetes.io/projected/ec0d3619-ae22-4683-87d0-8a91144a9bdb-kube-api-access-rpw2m\") pod \"mariadb-client-2\" (UID: \"ec0d3619-ae22-4683-87d0-8a91144a9bdb\") " pod="openstack/mariadb-client-2" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.541256 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.559494 4867 scope.go:117] "RemoveContainer" containerID="c5e2b8d777ef6c17c69dc6f3e4ed7c80c2174c252c9c14228c6b51284ab48751" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.559566 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Jan 01 09:51:21 crc kubenswrapper[4867]: I0101 09:51:21.886421 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Jan 01 09:51:21 crc kubenswrapper[4867]: W0101 09:51:21.887494 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec0d3619_ae22_4683_87d0_8a91144a9bdb.slice/crio-e0aad675600999108a7798dcf39cfe06a58458710cc60844eba4a7c8f420c97d WatchSource:0}: Error finding container e0aad675600999108a7798dcf39cfe06a58458710cc60844eba4a7c8f420c97d: Status 404 returned error can't find the container with id e0aad675600999108a7798dcf39cfe06a58458710cc60844eba4a7c8f420c97d Jan 01 09:51:22 crc kubenswrapper[4867]: I0101 09:51:22.129250 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:51:22 crc kubenswrapper[4867]: E0101 09:51:22.130680 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:51:22 crc kubenswrapper[4867]: I0101 09:51:22.575786 4867 generic.go:334] "Generic (PLEG): container finished" podID="ec0d3619-ae22-4683-87d0-8a91144a9bdb" containerID="a437daf25fe445984b12e79ecd4b77dc56eff0b79ee371c7b3b6225531125def" exitCode=0 Jan 01 09:51:22 crc kubenswrapper[4867]: I0101 09:51:22.575960 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"ec0d3619-ae22-4683-87d0-8a91144a9bdb","Type":"ContainerDied","Data":"a437daf25fe445984b12e79ecd4b77dc56eff0b79ee371c7b3b6225531125def"} Jan 01 
09:51:22 crc kubenswrapper[4867]: I0101 09:51:22.576381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"ec0d3619-ae22-4683-87d0-8a91144a9bdb","Type":"ContainerStarted","Data":"e0aad675600999108a7798dcf39cfe06a58458710cc60844eba4a7c8f420c97d"} Jan 01 09:51:24 crc kubenswrapper[4867]: I0101 09:51:24.043758 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Jan 01 09:51:24 crc kubenswrapper[4867]: I0101 09:51:24.059745 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_ec0d3619-ae22-4683-87d0-8a91144a9bdb/mariadb-client-2/0.log" Jan 01 09:51:24 crc kubenswrapper[4867]: I0101 09:51:24.098246 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Jan 01 09:51:24 crc kubenswrapper[4867]: I0101 09:51:24.106591 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Jan 01 09:51:24 crc kubenswrapper[4867]: I0101 09:51:24.111456 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpw2m\" (UniqueName: \"kubernetes.io/projected/ec0d3619-ae22-4683-87d0-8a91144a9bdb-kube-api-access-rpw2m\") pod \"ec0d3619-ae22-4683-87d0-8a91144a9bdb\" (UID: \"ec0d3619-ae22-4683-87d0-8a91144a9bdb\") " Jan 01 09:51:24 crc kubenswrapper[4867]: I0101 09:51:24.120087 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec0d3619-ae22-4683-87d0-8a91144a9bdb-kube-api-access-rpw2m" (OuterVolumeSpecName: "kube-api-access-rpw2m") pod "ec0d3619-ae22-4683-87d0-8a91144a9bdb" (UID: "ec0d3619-ae22-4683-87d0-8a91144a9bdb"). InnerVolumeSpecName "kube-api-access-rpw2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:51:24 crc kubenswrapper[4867]: I0101 09:51:24.212788 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpw2m\" (UniqueName: \"kubernetes.io/projected/ec0d3619-ae22-4683-87d0-8a91144a9bdb-kube-api-access-rpw2m\") on node \"crc\" DevicePath \"\"" Jan 01 09:51:24 crc kubenswrapper[4867]: I0101 09:51:24.599648 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0aad675600999108a7798dcf39cfe06a58458710cc60844eba4a7c8f420c97d" Jan 01 09:51:24 crc kubenswrapper[4867]: I0101 09:51:24.599714 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Jan 01 09:51:25 crc kubenswrapper[4867]: I0101 09:51:25.148050 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec0d3619-ae22-4683-87d0-8a91144a9bdb" path="/var/lib/kubelet/pods/ec0d3619-ae22-4683-87d0-8a91144a9bdb/volumes" Jan 01 09:51:33 crc kubenswrapper[4867]: I0101 09:51:33.132102 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:51:33 crc kubenswrapper[4867]: E0101 09:51:33.132853 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:51:47 crc kubenswrapper[4867]: I0101 09:51:47.131447 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:51:47 crc kubenswrapper[4867]: E0101 09:51:47.132705 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:51:58 crc kubenswrapper[4867]: I0101 09:51:58.128790 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:51:58 crc kubenswrapper[4867]: E0101 09:51:58.129959 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:52:11 crc kubenswrapper[4867]: I0101 09:52:11.137594 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:52:11 crc kubenswrapper[4867]: E0101 09:52:11.138985 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:52:24 crc kubenswrapper[4867]: I0101 09:52:24.129557 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:52:24 crc kubenswrapper[4867]: E0101 09:52:24.130644 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:52:39 crc kubenswrapper[4867]: I0101 09:52:39.129225 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:52:39 crc kubenswrapper[4867]: E0101 09:52:39.132745 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:52:41 crc kubenswrapper[4867]: I0101 09:52:41.122795 4867 scope.go:117] "RemoveContainer" containerID="24afb404a91637b9d73eadb239f1b53dd5c9073622ed14aa097f4a0ba7aa3093" Jan 01 09:52:54 crc kubenswrapper[4867]: I0101 09:52:54.129159 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:52:54 crc kubenswrapper[4867]: E0101 09:52:54.130420 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:53:06 crc kubenswrapper[4867]: I0101 09:53:06.128223 4867 scope.go:117] "RemoveContainer" 
containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:53:06 crc kubenswrapper[4867]: E0101 09:53:06.129243 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:53:20 crc kubenswrapper[4867]: I0101 09:53:20.129533 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:53:20 crc kubenswrapper[4867]: E0101 09:53:20.130436 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:53:35 crc kubenswrapper[4867]: I0101 09:53:35.128833 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:53:35 crc kubenswrapper[4867]: E0101 09:53:35.129819 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:53:47 crc kubenswrapper[4867]: I0101 09:53:47.128999 4867 scope.go:117] 
"RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:53:47 crc kubenswrapper[4867]: E0101 09:53:47.130384 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:53:58 crc kubenswrapper[4867]: I0101 09:53:58.129770 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:53:58 crc kubenswrapper[4867]: E0101 09:53:58.131160 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:54:13 crc kubenswrapper[4867]: I0101 09:54:13.129340 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:54:13 crc kubenswrapper[4867]: E0101 09:54:13.130633 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:54:26 crc kubenswrapper[4867]: I0101 09:54:26.129021 
4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:54:26 crc kubenswrapper[4867]: E0101 09:54:26.130338 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.623658 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8g5f7"] Jan 01 09:54:34 crc kubenswrapper[4867]: E0101 09:54:34.624494 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0d3619-ae22-4683-87d0-8a91144a9bdb" containerName="mariadb-client-2" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.624507 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0d3619-ae22-4683-87d0-8a91144a9bdb" containerName="mariadb-client-2" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.624643 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec0d3619-ae22-4683-87d0-8a91144a9bdb" containerName="mariadb-client-2" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.625626 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.639404 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8g5f7"] Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.795316 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-catalog-content\") pod \"redhat-operators-8g5f7\" (UID: \"4eff41c5-637d-4a57-bab0-4bd761808c43\") " pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.795606 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-utilities\") pod \"redhat-operators-8g5f7\" (UID: \"4eff41c5-637d-4a57-bab0-4bd761808c43\") " pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.795693 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqsn2\" (UniqueName: \"kubernetes.io/projected/4eff41c5-637d-4a57-bab0-4bd761808c43-kube-api-access-pqsn2\") pod \"redhat-operators-8g5f7\" (UID: \"4eff41c5-637d-4a57-bab0-4bd761808c43\") " pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.897191 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-catalog-content\") pod \"redhat-operators-8g5f7\" (UID: \"4eff41c5-637d-4a57-bab0-4bd761808c43\") " pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.897252 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-utilities\") pod \"redhat-operators-8g5f7\" (UID: \"4eff41c5-637d-4a57-bab0-4bd761808c43\") " pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.897273 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqsn2\" (UniqueName: \"kubernetes.io/projected/4eff41c5-637d-4a57-bab0-4bd761808c43-kube-api-access-pqsn2\") pod \"redhat-operators-8g5f7\" (UID: \"4eff41c5-637d-4a57-bab0-4bd761808c43\") " pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.898059 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-catalog-content\") pod \"redhat-operators-8g5f7\" (UID: \"4eff41c5-637d-4a57-bab0-4bd761808c43\") " pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.898185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-utilities\") pod \"redhat-operators-8g5f7\" (UID: \"4eff41c5-637d-4a57-bab0-4bd761808c43\") " pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.920138 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqsn2\" (UniqueName: \"kubernetes.io/projected/4eff41c5-637d-4a57-bab0-4bd761808c43-kube-api-access-pqsn2\") pod \"redhat-operators-8g5f7\" (UID: \"4eff41c5-637d-4a57-bab0-4bd761808c43\") " pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:34 crc kubenswrapper[4867]: I0101 09:54:34.939970 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:35 crc kubenswrapper[4867]: I0101 09:54:35.413387 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g5f7" event={"ID":"4eff41c5-637d-4a57-bab0-4bd761808c43","Type":"ContainerStarted","Data":"48e1abb5280813265358462e42b191574757acc110b38ebdfa207b3c75e3d1a9"} Jan 01 09:54:35 crc kubenswrapper[4867]: I0101 09:54:35.415971 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8g5f7"] Jan 01 09:54:36 crc kubenswrapper[4867]: I0101 09:54:36.427669 4867 generic.go:334] "Generic (PLEG): container finished" podID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerID="6744e1ebe2388f023358b83276eae0f5679112666e486388b36c80a3d37a8743" exitCode=0 Jan 01 09:54:36 crc kubenswrapper[4867]: I0101 09:54:36.428193 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g5f7" event={"ID":"4eff41c5-637d-4a57-bab0-4bd761808c43","Type":"ContainerDied","Data":"6744e1ebe2388f023358b83276eae0f5679112666e486388b36c80a3d37a8743"} Jan 01 09:54:37 crc kubenswrapper[4867]: I0101 09:54:37.441987 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g5f7" event={"ID":"4eff41c5-637d-4a57-bab0-4bd761808c43","Type":"ContainerStarted","Data":"098dcc52863df78f510d5b6a98fc503185521b5625231e3f4f6d39c74276dfac"} Jan 01 09:54:38 crc kubenswrapper[4867]: I0101 09:54:38.457034 4867 generic.go:334] "Generic (PLEG): container finished" podID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerID="098dcc52863df78f510d5b6a98fc503185521b5625231e3f4f6d39c74276dfac" exitCode=0 Jan 01 09:54:38 crc kubenswrapper[4867]: I0101 09:54:38.457099 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g5f7" 
event={"ID":"4eff41c5-637d-4a57-bab0-4bd761808c43","Type":"ContainerDied","Data":"098dcc52863df78f510d5b6a98fc503185521b5625231e3f4f6d39c74276dfac"} Jan 01 09:54:39 crc kubenswrapper[4867]: I0101 09:54:39.132125 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:54:39 crc kubenswrapper[4867]: E0101 09:54:39.132817 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:54:39 crc kubenswrapper[4867]: I0101 09:54:39.470838 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g5f7" event={"ID":"4eff41c5-637d-4a57-bab0-4bd761808c43","Type":"ContainerStarted","Data":"89934efaf7ddbea81a065ba182e305170139514ef4e6356220fc01e692927ba1"} Jan 01 09:54:39 crc kubenswrapper[4867]: I0101 09:54:39.498047 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8g5f7" podStartSLOduration=3.045094299 podStartE2EDuration="5.498024454s" podCreationTimestamp="2026-01-01 09:54:34 +0000 UTC" firstStartedPulling="2026-01-01 09:54:36.431316335 +0000 UTC m=+5285.566585144" lastFinishedPulling="2026-01-01 09:54:38.8842465 +0000 UTC m=+5288.019515299" observedRunningTime="2026-01-01 09:54:39.495989307 +0000 UTC m=+5288.631258136" watchObservedRunningTime="2026-01-01 09:54:39.498024454 +0000 UTC m=+5288.633293233" Jan 01 09:54:44 crc kubenswrapper[4867]: I0101 09:54:44.941022 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:44 crc 
kubenswrapper[4867]: I0101 09:54:44.941822 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:46 crc kubenswrapper[4867]: I0101 09:54:46.010785 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8g5f7" podUID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerName="registry-server" probeResult="failure" output=< Jan 01 09:54:46 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 01 09:54:46 crc kubenswrapper[4867]: > Jan 01 09:54:51 crc kubenswrapper[4867]: I0101 09:54:51.136938 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:54:51 crc kubenswrapper[4867]: E0101 09:54:51.138376 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:54:55 crc kubenswrapper[4867]: I0101 09:54:55.003784 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:55 crc kubenswrapper[4867]: I0101 09:54:55.068564 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:55 crc kubenswrapper[4867]: I0101 09:54:55.265554 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8g5f7"] Jan 01 09:54:56 crc kubenswrapper[4867]: I0101 09:54:56.643752 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8g5f7" 
podUID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerName="registry-server" containerID="cri-o://89934efaf7ddbea81a065ba182e305170139514ef4e6356220fc01e692927ba1" gracePeriod=2 Jan 01 09:54:57 crc kubenswrapper[4867]: I0101 09:54:57.657784 4867 generic.go:334] "Generic (PLEG): container finished" podID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerID="89934efaf7ddbea81a065ba182e305170139514ef4e6356220fc01e692927ba1" exitCode=0 Jan 01 09:54:57 crc kubenswrapper[4867]: I0101 09:54:57.657938 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g5f7" event={"ID":"4eff41c5-637d-4a57-bab0-4bd761808c43","Type":"ContainerDied","Data":"89934efaf7ddbea81a065ba182e305170139514ef4e6356220fc01e692927ba1"} Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.264638 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.358706 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-utilities\") pod \"4eff41c5-637d-4a57-bab0-4bd761808c43\" (UID: \"4eff41c5-637d-4a57-bab0-4bd761808c43\") " Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.358843 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqsn2\" (UniqueName: \"kubernetes.io/projected/4eff41c5-637d-4a57-bab0-4bd761808c43-kube-api-access-pqsn2\") pod \"4eff41c5-637d-4a57-bab0-4bd761808c43\" (UID: \"4eff41c5-637d-4a57-bab0-4bd761808c43\") " Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.358972 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-catalog-content\") pod \"4eff41c5-637d-4a57-bab0-4bd761808c43\" (UID: 
\"4eff41c5-637d-4a57-bab0-4bd761808c43\") " Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.359775 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-utilities" (OuterVolumeSpecName: "utilities") pod "4eff41c5-637d-4a57-bab0-4bd761808c43" (UID: "4eff41c5-637d-4a57-bab0-4bd761808c43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.365118 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eff41c5-637d-4a57-bab0-4bd761808c43-kube-api-access-pqsn2" (OuterVolumeSpecName: "kube-api-access-pqsn2") pod "4eff41c5-637d-4a57-bab0-4bd761808c43" (UID: "4eff41c5-637d-4a57-bab0-4bd761808c43"). InnerVolumeSpecName "kube-api-access-pqsn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.366816 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.366857 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqsn2\" (UniqueName: \"kubernetes.io/projected/4eff41c5-637d-4a57-bab0-4bd761808c43-kube-api-access-pqsn2\") on node \"crc\" DevicePath \"\"" Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.490948 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4eff41c5-637d-4a57-bab0-4bd761808c43" (UID: "4eff41c5-637d-4a57-bab0-4bd761808c43"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.571047 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eff41c5-637d-4a57-bab0-4bd761808c43-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.672212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g5f7" event={"ID":"4eff41c5-637d-4a57-bab0-4bd761808c43","Type":"ContainerDied","Data":"48e1abb5280813265358462e42b191574757acc110b38ebdfa207b3c75e3d1a9"} Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.672271 4867 scope.go:117] "RemoveContainer" containerID="89934efaf7ddbea81a065ba182e305170139514ef4e6356220fc01e692927ba1" Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.672331 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8g5f7" Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.698136 4867 scope.go:117] "RemoveContainer" containerID="098dcc52863df78f510d5b6a98fc503185521b5625231e3f4f6d39c74276dfac" Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.716131 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8g5f7"] Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.723241 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8g5f7"] Jan 01 09:54:58 crc kubenswrapper[4867]: I0101 09:54:58.733836 4867 scope.go:117] "RemoveContainer" containerID="6744e1ebe2388f023358b83276eae0f5679112666e486388b36c80a3d37a8743" Jan 01 09:54:59 crc kubenswrapper[4867]: I0101 09:54:59.144724 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eff41c5-637d-4a57-bab0-4bd761808c43" path="/var/lib/kubelet/pods/4eff41c5-637d-4a57-bab0-4bd761808c43/volumes" Jan 01 09:55:01 crc 
kubenswrapper[4867]: I0101 09:55:01.732990 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 01 09:55:01 crc kubenswrapper[4867]: E0101 09:55:01.735588 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerName="registry-server" Jan 01 09:55:01 crc kubenswrapper[4867]: I0101 09:55:01.735634 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerName="registry-server" Jan 01 09:55:01 crc kubenswrapper[4867]: E0101 09:55:01.735683 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerName="extract-content" Jan 01 09:55:01 crc kubenswrapper[4867]: I0101 09:55:01.735696 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerName="extract-content" Jan 01 09:55:01 crc kubenswrapper[4867]: E0101 09:55:01.735725 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerName="extract-utilities" Jan 01 09:55:01 crc kubenswrapper[4867]: I0101 09:55:01.735736 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerName="extract-utilities" Jan 01 09:55:01 crc kubenswrapper[4867]: I0101 09:55:01.736114 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eff41c5-637d-4a57-bab0-4bd761808c43" containerName="registry-server" Jan 01 09:55:01 crc kubenswrapper[4867]: I0101 09:55:01.737387 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 01 09:55:01 crc kubenswrapper[4867]: I0101 09:55:01.742867 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-86bsg" Jan 01 09:55:01 crc kubenswrapper[4867]: I0101 09:55:01.746122 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 01 09:55:01 crc kubenswrapper[4867]: I0101 09:55:01.962667 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqtkl\" (UniqueName: \"kubernetes.io/projected/c5c828c6-13cc-4866-ae31-8cb33206e039-kube-api-access-tqtkl\") pod \"mariadb-copy-data\" (UID: \"c5c828c6-13cc-4866-ae31-8cb33206e039\") " pod="openstack/mariadb-copy-data" Jan 01 09:55:01 crc kubenswrapper[4867]: I0101 09:55:01.962721 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-82dedd91-4873-48d6-b7b3-79f1ec9568e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82dedd91-4873-48d6-b7b3-79f1ec9568e2\") pod \"mariadb-copy-data\" (UID: \"c5c828c6-13cc-4866-ae31-8cb33206e039\") " pod="openstack/mariadb-copy-data" Jan 01 09:55:02 crc kubenswrapper[4867]: I0101 09:55:02.064615 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqtkl\" (UniqueName: \"kubernetes.io/projected/c5c828c6-13cc-4866-ae31-8cb33206e039-kube-api-access-tqtkl\") pod \"mariadb-copy-data\" (UID: \"c5c828c6-13cc-4866-ae31-8cb33206e039\") " pod="openstack/mariadb-copy-data" Jan 01 09:55:02 crc kubenswrapper[4867]: I0101 09:55:02.064693 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-82dedd91-4873-48d6-b7b3-79f1ec9568e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82dedd91-4873-48d6-b7b3-79f1ec9568e2\") pod \"mariadb-copy-data\" (UID: \"c5c828c6-13cc-4866-ae31-8cb33206e039\") " pod="openstack/mariadb-copy-data" 
Jan 01 09:55:02 crc kubenswrapper[4867]: I0101 09:55:02.068356 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 01 09:55:02 crc kubenswrapper[4867]: I0101 09:55:02.068417 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-82dedd91-4873-48d6-b7b3-79f1ec9568e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82dedd91-4873-48d6-b7b3-79f1ec9568e2\") pod \"mariadb-copy-data\" (UID: \"c5c828c6-13cc-4866-ae31-8cb33206e039\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2be330e588412b86db3a5a3cf14b632cffedd891224fa283b5abbf224d50aa82/globalmount\"" pod="openstack/mariadb-copy-data" Jan 01 09:55:02 crc kubenswrapper[4867]: I0101 09:55:02.090366 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqtkl\" (UniqueName: \"kubernetes.io/projected/c5c828c6-13cc-4866-ae31-8cb33206e039-kube-api-access-tqtkl\") pod \"mariadb-copy-data\" (UID: \"c5c828c6-13cc-4866-ae31-8cb33206e039\") " pod="openstack/mariadb-copy-data" Jan 01 09:55:02 crc kubenswrapper[4867]: I0101 09:55:02.103414 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-82dedd91-4873-48d6-b7b3-79f1ec9568e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82dedd91-4873-48d6-b7b3-79f1ec9568e2\") pod \"mariadb-copy-data\" (UID: \"c5c828c6-13cc-4866-ae31-8cb33206e039\") " pod="openstack/mariadb-copy-data" Jan 01 09:55:02 crc kubenswrapper[4867]: I0101 09:55:02.129367 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:55:02 crc kubenswrapper[4867]: E0101 09:55:02.129698 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:55:02 crc kubenswrapper[4867]: I0101 09:55:02.367810 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 01 09:55:02 crc kubenswrapper[4867]: I0101 09:55:02.729043 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 01 09:55:02 crc kubenswrapper[4867]: I0101 09:55:02.768510 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"c5c828c6-13cc-4866-ae31-8cb33206e039","Type":"ContainerStarted","Data":"bf48502e1f918d09b2386769275becaa51d3d2d49c6d52b1f6703acf0367c7a3"} Jan 01 09:55:03 crc kubenswrapper[4867]: I0101 09:55:03.777737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"c5c828c6-13cc-4866-ae31-8cb33206e039","Type":"ContainerStarted","Data":"66bba86d4bc1e16b8ec21fefbffca2787b060c53664125e88191ba4e685b8e42"} Jan 01 09:55:03 crc kubenswrapper[4867]: I0101 09:55:03.802133 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.802119574 podStartE2EDuration="3.802119574s" podCreationTimestamp="2026-01-01 09:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:55:03.800071306 +0000 UTC m=+5312.935340125" watchObservedRunningTime="2026-01-01 09:55:03.802119574 +0000 UTC m=+5312.937388343" Jan 01 09:55:06 crc kubenswrapper[4867]: I0101 09:55:06.901951 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 01 09:55:06 crc kubenswrapper[4867]: I0101 09:55:06.905056 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 01 09:55:06 crc kubenswrapper[4867]: I0101 09:55:06.917001 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 01 09:55:07 crc kubenswrapper[4867]: I0101 09:55:07.093734 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpssb\" (UniqueName: \"kubernetes.io/projected/9f49224a-154a-4477-b2a9-b45cfe6b09fc-kube-api-access-wpssb\") pod \"mariadb-client\" (UID: \"9f49224a-154a-4477-b2a9-b45cfe6b09fc\") " pod="openstack/mariadb-client" Jan 01 09:55:07 crc kubenswrapper[4867]: I0101 09:55:07.196120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpssb\" (UniqueName: \"kubernetes.io/projected/9f49224a-154a-4477-b2a9-b45cfe6b09fc-kube-api-access-wpssb\") pod \"mariadb-client\" (UID: \"9f49224a-154a-4477-b2a9-b45cfe6b09fc\") " pod="openstack/mariadb-client" Jan 01 09:55:07 crc kubenswrapper[4867]: I0101 09:55:07.230865 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpssb\" (UniqueName: \"kubernetes.io/projected/9f49224a-154a-4477-b2a9-b45cfe6b09fc-kube-api-access-wpssb\") pod \"mariadb-client\" (UID: \"9f49224a-154a-4477-b2a9-b45cfe6b09fc\") " pod="openstack/mariadb-client" Jan 01 09:55:07 crc kubenswrapper[4867]: I0101 09:55:07.242170 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 01 09:55:07 crc kubenswrapper[4867]: I0101 09:55:07.758688 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 01 09:55:07 crc kubenswrapper[4867]: I0101 09:55:07.824594 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9f49224a-154a-4477-b2a9-b45cfe6b09fc","Type":"ContainerStarted","Data":"5162c0b82196b653e7cd0fb99aad2823f57460d8d9a99553a87c0b79dcae5e1f"} Jan 01 09:55:08 crc kubenswrapper[4867]: I0101 09:55:08.836417 4867 generic.go:334] "Generic (PLEG): container finished" podID="9f49224a-154a-4477-b2a9-b45cfe6b09fc" containerID="e1533fa58c3a26b664b7954368c94de4e3fd4cc5a9de896b829a2dfdda06c482" exitCode=0 Jan 01 09:55:08 crc kubenswrapper[4867]: I0101 09:55:08.836482 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9f49224a-154a-4477-b2a9-b45cfe6b09fc","Type":"ContainerDied","Data":"e1533fa58c3a26b664b7954368c94de4e3fd4cc5a9de896b829a2dfdda06c482"} Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.271873 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.305423 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_9f49224a-154a-4477-b2a9-b45cfe6b09fc/mariadb-client/0.log" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.348929 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.359739 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.449742 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpssb\" (UniqueName: \"kubernetes.io/projected/9f49224a-154a-4477-b2a9-b45cfe6b09fc-kube-api-access-wpssb\") pod \"9f49224a-154a-4477-b2a9-b45cfe6b09fc\" (UID: \"9f49224a-154a-4477-b2a9-b45cfe6b09fc\") " Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.455038 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f49224a-154a-4477-b2a9-b45cfe6b09fc-kube-api-access-wpssb" (OuterVolumeSpecName: "kube-api-access-wpssb") pod "9f49224a-154a-4477-b2a9-b45cfe6b09fc" (UID: "9f49224a-154a-4477-b2a9-b45cfe6b09fc"). InnerVolumeSpecName "kube-api-access-wpssb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.531990 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 01 09:55:10 crc kubenswrapper[4867]: E0101 09:55:10.532330 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f49224a-154a-4477-b2a9-b45cfe6b09fc" containerName="mariadb-client" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.532350 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f49224a-154a-4477-b2a9-b45cfe6b09fc" containerName="mariadb-client" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.532527 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f49224a-154a-4477-b2a9-b45cfe6b09fc" containerName="mariadb-client" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.533091 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.550983 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.551396 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpssb\" (UniqueName: \"kubernetes.io/projected/9f49224a-154a-4477-b2a9-b45cfe6b09fc-kube-api-access-wpssb\") on node \"crc\" DevicePath \"\"" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.653124 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82bn6\" (UniqueName: \"kubernetes.io/projected/e7be6ada-6351-4944-9c13-d7d8d0c4d65c-kube-api-access-82bn6\") pod \"mariadb-client\" (UID: \"e7be6ada-6351-4944-9c13-d7d8d0c4d65c\") " pod="openstack/mariadb-client" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.754384 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82bn6\" (UniqueName: 
\"kubernetes.io/projected/e7be6ada-6351-4944-9c13-d7d8d0c4d65c-kube-api-access-82bn6\") pod \"mariadb-client\" (UID: \"e7be6ada-6351-4944-9c13-d7d8d0c4d65c\") " pod="openstack/mariadb-client" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.792442 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82bn6\" (UniqueName: \"kubernetes.io/projected/e7be6ada-6351-4944-9c13-d7d8d0c4d65c-kube-api-access-82bn6\") pod \"mariadb-client\" (UID: \"e7be6ada-6351-4944-9c13-d7d8d0c4d65c\") " pod="openstack/mariadb-client" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.851635 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5162c0b82196b653e7cd0fb99aad2823f57460d8d9a99553a87c0b79dcae5e1f" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.851685 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.861929 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 01 09:55:10 crc kubenswrapper[4867]: I0101 09:55:10.869166 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="9f49224a-154a-4477-b2a9-b45cfe6b09fc" podUID="e7be6ada-6351-4944-9c13-d7d8d0c4d65c" Jan 01 09:55:11 crc kubenswrapper[4867]: I0101 09:55:11.145459 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f49224a-154a-4477-b2a9-b45cfe6b09fc" path="/var/lib/kubelet/pods/9f49224a-154a-4477-b2a9-b45cfe6b09fc/volumes" Jan 01 09:55:11 crc kubenswrapper[4867]: I0101 09:55:11.369342 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 01 09:55:11 crc kubenswrapper[4867]: W0101 09:55:11.371913 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7be6ada_6351_4944_9c13_d7d8d0c4d65c.slice/crio-e60e4515f4865ecf407e6b77fc5e6580a1094cb11c6c202109c8b202130f1a62 WatchSource:0}: Error finding container e60e4515f4865ecf407e6b77fc5e6580a1094cb11c6c202109c8b202130f1a62: Status 404 returned error can't find the container with id e60e4515f4865ecf407e6b77fc5e6580a1094cb11c6c202109c8b202130f1a62 Jan 01 09:55:11 crc kubenswrapper[4867]: I0101 09:55:11.865513 4867 generic.go:334] "Generic (PLEG): container finished" podID="e7be6ada-6351-4944-9c13-d7d8d0c4d65c" containerID="3d4fe00bd9f7ab58832c31b6e067e03ee28e44f37f7596d5b78ad71f16ad7b65" exitCode=0 Jan 01 09:55:11 crc kubenswrapper[4867]: I0101 09:55:11.865592 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"e7be6ada-6351-4944-9c13-d7d8d0c4d65c","Type":"ContainerDied","Data":"3d4fe00bd9f7ab58832c31b6e067e03ee28e44f37f7596d5b78ad71f16ad7b65"} Jan 01 09:55:11 crc kubenswrapper[4867]: I0101 09:55:11.866023 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"e7be6ada-6351-4944-9c13-d7d8d0c4d65c","Type":"ContainerStarted","Data":"e60e4515f4865ecf407e6b77fc5e6580a1094cb11c6c202109c8b202130f1a62"} Jan 01 09:55:13 crc kubenswrapper[4867]: I0101 09:55:13.255233 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 01 09:55:13 crc kubenswrapper[4867]: I0101 09:55:13.273711 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_e7be6ada-6351-4944-9c13-d7d8d0c4d65c/mariadb-client/0.log" Jan 01 09:55:13 crc kubenswrapper[4867]: I0101 09:55:13.304984 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 01 09:55:13 crc kubenswrapper[4867]: I0101 09:55:13.311117 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 01 09:55:13 crc kubenswrapper[4867]: I0101 09:55:13.399591 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82bn6\" (UniqueName: \"kubernetes.io/projected/e7be6ada-6351-4944-9c13-d7d8d0c4d65c-kube-api-access-82bn6\") pod \"e7be6ada-6351-4944-9c13-d7d8d0c4d65c\" (UID: \"e7be6ada-6351-4944-9c13-d7d8d0c4d65c\") " Jan 01 09:55:13 crc kubenswrapper[4867]: I0101 09:55:13.408951 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7be6ada-6351-4944-9c13-d7d8d0c4d65c-kube-api-access-82bn6" (OuterVolumeSpecName: "kube-api-access-82bn6") pod "e7be6ada-6351-4944-9c13-d7d8d0c4d65c" (UID: "e7be6ada-6351-4944-9c13-d7d8d0c4d65c"). InnerVolumeSpecName "kube-api-access-82bn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:55:13 crc kubenswrapper[4867]: I0101 09:55:13.501596 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82bn6\" (UniqueName: \"kubernetes.io/projected/e7be6ada-6351-4944-9c13-d7d8d0c4d65c-kube-api-access-82bn6\") on node \"crc\" DevicePath \"\"" Jan 01 09:55:13 crc kubenswrapper[4867]: I0101 09:55:13.887877 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e60e4515f4865ecf407e6b77fc5e6580a1094cb11c6c202109c8b202130f1a62" Jan 01 09:55:13 crc kubenswrapper[4867]: I0101 09:55:13.888088 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 01 09:55:14 crc kubenswrapper[4867]: I0101 09:55:14.128123 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:55:14 crc kubenswrapper[4867]: E0101 09:55:14.128510 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 09:55:15 crc kubenswrapper[4867]: I0101 09:55:15.147098 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7be6ada-6351-4944-9c13-d7d8d0c4d65c" path="/var/lib/kubelet/pods/e7be6ada-6351-4944-9c13-d7d8d0c4d65c/volumes" Jan 01 09:55:27 crc kubenswrapper[4867]: I0101 09:55:27.129713 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:55:28 crc kubenswrapper[4867]: I0101 09:55:28.024984 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"41d797c5a9ee389d0167543c461e0396ce5911531f543d8083183360c9bf4c88"} Jan 01 09:55:41 crc kubenswrapper[4867]: I0101 09:55:41.238064 4867 scope.go:117] "RemoveContainer" containerID="d909d050723c9ed78f6c3878f968960374acc20536ec4244489d3077a0d48354" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.272475 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 01 09:55:45 crc kubenswrapper[4867]: E0101 09:55:45.277356 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7be6ada-6351-4944-9c13-d7d8d0c4d65c" containerName="mariadb-client" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.277550 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7be6ada-6351-4944-9c13-d7d8d0c4d65c" containerName="mariadb-client" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.277986 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7be6ada-6351-4944-9c13-d7d8d0c4d65c" containerName="mariadb-client" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.279578 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.279775 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.284114 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.284231 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-h58c5" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.284125 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.284819 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.290036 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.290447 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.292294 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.325667 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.331588 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.348415 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.372475 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459114 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkptb\" (UniqueName: \"kubernetes.io/projected/73ffb5ca-edc5-4352-8131-bd2322e6f9da-kube-api-access-kkptb\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459153 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ffb5ca-edc5-4352-8131-bd2322e6f9da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459194 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73ffb5ca-edc5-4352-8131-bd2322e6f9da-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459217 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d1a309c1-b669-4a4c-a9b1-26c6aa2952de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d1a309c1-b669-4a4c-a9b1-26c6aa2952de\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc 
kubenswrapper[4867]: I0101 09:55:45.459237 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ffb5ca-edc5-4352-8131-bd2322e6f9da-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459258 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459275 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cea4cfca-bb56-4279-93dc-cdd875e9d369\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cea4cfca-bb56-4279-93dc-cdd875e9d369\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459295 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459315 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6794a96b-8a53-4b4d-81c5-61f54a3fe243-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 
09:55:45.459334 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-config\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459349 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6794a96b-8a53-4b4d-81c5-61f54a3fe243-config\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459365 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6794a96b-8a53-4b4d-81c5-61f54a3fe243-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459386 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459400 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6794a96b-8a53-4b4d-81c5-61f54a3fe243-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459435 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5591f325-da9b-4dea-824f-1daea8c27796\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5591f325-da9b-4dea-824f-1daea8c27796\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459453 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtst6\" (UniqueName: \"kubernetes.io/projected/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-kube-api-access-mtst6\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffb5ca-edc5-4352-8131-bd2322e6f9da-config\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459481 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ffb5ca-edc5-4352-8131-bd2322e6f9da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459511 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ffb5ca-edc5-4352-8131-bd2322e6f9da-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459571 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459610 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6794a96b-8a53-4b4d-81c5-61f54a3fe243-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459637 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459676 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6794a96b-8a53-4b4d-81c5-61f54a3fe243-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.459691 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk2bt\" (UniqueName: \"kubernetes.io/projected/6794a96b-8a53-4b4d-81c5-61f54a3fe243-kube-api-access-nk2bt\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560499 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-5591f325-da9b-4dea-824f-1daea8c27796\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5591f325-da9b-4dea-824f-1daea8c27796\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560551 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtst6\" (UniqueName: \"kubernetes.io/projected/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-kube-api-access-mtst6\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ffb5ca-edc5-4352-8131-bd2322e6f9da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560593 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffb5ca-edc5-4352-8131-bd2322e6f9da-config\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560626 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ffb5ca-edc5-4352-8131-bd2322e6f9da-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560662 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6794a96b-8a53-4b4d-81c5-61f54a3fe243-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560678 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560695 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk2bt\" (UniqueName: \"kubernetes.io/projected/6794a96b-8a53-4b4d-81c5-61f54a3fe243-kube-api-access-nk2bt\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560713 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6794a96b-8a53-4b4d-81c5-61f54a3fe243-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkptb\" (UniqueName: \"kubernetes.io/projected/73ffb5ca-edc5-4352-8131-bd2322e6f9da-kube-api-access-kkptb\") pod \"ovsdbserver-nb-1\" (UID: 
\"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ffb5ca-edc5-4352-8131-bd2322e6f9da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560773 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73ffb5ca-edc5-4352-8131-bd2322e6f9da-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560794 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d1a309c1-b669-4a4c-a9b1-26c6aa2952de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d1a309c1-b669-4a4c-a9b1-26c6aa2952de\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560812 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ffb5ca-edc5-4352-8131-bd2322e6f9da-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560831 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cea4cfca-bb56-4279-93dc-cdd875e9d369\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cea4cfca-bb56-4279-93dc-cdd875e9d369\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " 
pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560847 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560866 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560903 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6794a96b-8a53-4b4d-81c5-61f54a3fe243-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560926 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-config\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560941 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6794a96b-8a53-4b4d-81c5-61f54a3fe243-config\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560956 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6794a96b-8a53-4b4d-81c5-61f54a3fe243-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560976 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.560993 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6794a96b-8a53-4b4d-81c5-61f54a3fe243-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.561956 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73ffb5ca-edc5-4352-8131-bd2322e6f9da-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.562849 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6794a96b-8a53-4b4d-81c5-61f54a3fe243-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.563136 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" 
Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.563373 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6794a96b-8a53-4b4d-81c5-61f54a3fe243-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.563622 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6794a96b-8a53-4b4d-81c5-61f54a3fe243-config\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.564874 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.564921 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5591f325-da9b-4dea-824f-1daea8c27796\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5591f325-da9b-4dea-824f-1daea8c27796\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b46701213bb6c3697b89dc122b21f9074b726b159d7150a6e2ac66b77f1862b6/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.565365 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.565438 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d1a309c1-b669-4a4c-a9b1-26c6aa2952de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d1a309c1-b669-4a4c-a9b1-26c6aa2952de\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9dce9dbc2ee67b6b75178f283ebcfc64eec1a6557c3050c2227ae115f49b8499/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.565830 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ffb5ca-edc5-4352-8131-bd2322e6f9da-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.565880 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.565968 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cea4cfca-bb56-4279-93dc-cdd875e9d369\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cea4cfca-bb56-4279-93dc-cdd875e9d369\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4d8a5af4b5479f04a6951aa7777816725708c224c91e42cdd50db5075f833162/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.566267 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-config\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.566317 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffb5ca-edc5-4352-8131-bd2322e6f9da-config\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.567446 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.567702 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6794a96b-8a53-4b4d-81c5-61f54a3fe243-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc 
kubenswrapper[4867]: I0101 09:55:45.570804 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6794a96b-8a53-4b4d-81c5-61f54a3fe243-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.572168 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ffb5ca-edc5-4352-8131-bd2322e6f9da-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.572470 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.572811 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ffb5ca-edc5-4352-8131-bd2322e6f9da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.572924 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ffb5ca-edc5-4352-8131-bd2322e6f9da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.574737 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.574862 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.580569 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkptb\" (UniqueName: \"kubernetes.io/projected/73ffb5ca-edc5-4352-8131-bd2322e6f9da-kube-api-access-kkptb\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.586535 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6794a96b-8a53-4b4d-81c5-61f54a3fe243-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.590904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtst6\" (UniqueName: \"kubernetes.io/projected/c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05-kube-api-access-mtst6\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.594627 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk2bt\" (UniqueName: \"kubernetes.io/projected/6794a96b-8a53-4b4d-81c5-61f54a3fe243-kube-api-access-nk2bt\") pod \"ovsdbserver-nb-2\" (UID: 
\"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.616291 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5591f325-da9b-4dea-824f-1daea8c27796\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5591f325-da9b-4dea-824f-1daea8c27796\") pod \"ovsdbserver-nb-0\" (UID: \"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05\") " pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.616990 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cea4cfca-bb56-4279-93dc-cdd875e9d369\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cea4cfca-bb56-4279-93dc-cdd875e9d369\") pod \"ovsdbserver-nb-2\" (UID: \"6794a96b-8a53-4b4d-81c5-61f54a3fe243\") " pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.624361 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d1a309c1-b669-4a4c-a9b1-26c6aa2952de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d1a309c1-b669-4a4c-a9b1-26c6aa2952de\") pod \"ovsdbserver-nb-1\" (UID: \"73ffb5ca-edc5-4352-8131-bd2322e6f9da\") " pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.630040 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.649744 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:45 crc kubenswrapper[4867]: I0101 09:55:45.661341 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:46 crc kubenswrapper[4867]: W0101 09:55:46.207625 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc82f7c72_fbe7_4d7e_9db9_8c1bb5dfcc05.slice/crio-17063e3e55de9c62926275d9e70163b20afd17069427b21af33166a3aadd8f36 WatchSource:0}: Error finding container 17063e3e55de9c62926275d9e70163b20afd17069427b21af33166a3aadd8f36: Status 404 returned error can't find the container with id 17063e3e55de9c62926275d9e70163b20afd17069427b21af33166a3aadd8f36 Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.207746 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.303397 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.899444 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.901023 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.902742 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.903412 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-m9fdj" Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.904296 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.904335 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.920228 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.927016 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.928625 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.933170 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.935362 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.953307 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 01 09:55:46 crc kubenswrapper[4867]: I0101 09:55:46.961011 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.028219 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.028352 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.028417 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.028451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.028490 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8cac148f-5a08-45df-9df7-cb01c0237a20\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8cac148f-5a08-45df-9df7-cb01c0237a20\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.028573 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.028638 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjxnl\" (UniqueName: \"kubernetes.io/projected/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-kube-api-access-qjxnl\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.028675 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.129866 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qpcp\" (UniqueName: \"kubernetes.io/projected/efced2ba-0a7c-4f12-8c38-442eff97aae8-kube-api-access-9qpcp\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.129980 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efced2ba-0a7c-4f12-8c38-442eff97aae8-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.130066 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/efced2ba-0a7c-4f12-8c38-442eff97aae8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.130227 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-config\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.130274 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.130315 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efced2ba-0a7c-4f12-8c38-442eff97aae8-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.130485 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.130594 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-66d1407e-c4f5-4540-8ab1-3be1e236beae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d1407e-c4f5-4540-8ab1-3be1e236beae\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.130743 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/efced2ba-0a7c-4f12-8c38-442eff97aae8-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.130829 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.130927 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.131000 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.131066 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8cac148f-5a08-45df-9df7-cb01c0237a20\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8cac148f-5a08-45df-9df7-cb01c0237a20\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.131564 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.131802 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.132606 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efced2ba-0a7c-4f12-8c38-442eff97aae8-config\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.132704 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46ac97b0-86db-4d74-8680-034f2919880d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46ac97b0-86db-4d74-8680-034f2919880d\") pod 
\"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.132878 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.132997 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjxnl\" (UniqueName: \"kubernetes.io/projected/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-kube-api-access-qjxnl\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.133032 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.133151 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/efced2ba-0a7c-4f12-8c38-442eff97aae8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.133198 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " 
pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.133255 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.133294 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ds69\" (UniqueName: \"kubernetes.io/projected/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-kube-api-access-6ds69\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.133337 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.133407 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.135341 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.135848 4867 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.135926 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8cac148f-5a08-45df-9df7-cb01c0237a20\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8cac148f-5a08-45df-9df7-cb01c0237a20\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/884a92d8b399287cee28d0e31df6bf0cb436da8c03a84b3de9b8c09c7bbe4323/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.137214 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.140971 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.144540 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.154601 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjxnl\" (UniqueName: 
\"kubernetes.io/projected/3d31a0cf-43f1-4682-a8f0-2e778d2a06e4-kube-api-access-qjxnl\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.203344 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8cac148f-5a08-45df-9df7-cb01c0237a20\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8cac148f-5a08-45df-9df7-cb01c0237a20\") pod \"ovsdbserver-sb-0\" (UID: \"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4\") " pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.217070 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"73ffb5ca-edc5-4352-8131-bd2322e6f9da","Type":"ContainerStarted","Data":"aaee2841c8b0a06c8b4965e640239d24ba4075ed3db8902103eb5bd6e8d81dc1"} Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.217121 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"73ffb5ca-edc5-4352-8131-bd2322e6f9da","Type":"ContainerStarted","Data":"89d9abd12e7b1692f00a6b71a9fb43b3fc1698ae35df5dbdb2670a834ce44935"} Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.217134 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"73ffb5ca-edc5-4352-8131-bd2322e6f9da","Type":"ContainerStarted","Data":"2787372596a32dc6da1b858a2828361416ad87d4ccb22c349311b347e7eb1895"} Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.219280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05","Type":"ContainerStarted","Data":"b7147c7421686a7ab7d18055c32cad604e66af8f5d97ba7b75aa0058217aec02"} Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.219313 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05","Type":"ContainerStarted","Data":"7e23931b72e185c9fb33b353d4a3a36317ef1dc73c04b8e67cd205426d1d0f4e"} Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.219328 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05","Type":"ContainerStarted","Data":"17063e3e55de9c62926275d9e70163b20afd17069427b21af33166a3aadd8f36"} Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.232718 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236198 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236277 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efced2ba-0a7c-4f12-8c38-442eff97aae8-config\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236311 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46ac97b0-86db-4d74-8680-034f2919880d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46ac97b0-86db-4d74-8680-034f2919880d\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/efced2ba-0a7c-4f12-8c38-442eff97aae8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236429 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236456 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236479 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ds69\" (UniqueName: \"kubernetes.io/projected/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-kube-api-access-6ds69\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236524 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qpcp\" (UniqueName: \"kubernetes.io/projected/efced2ba-0a7c-4f12-8c38-442eff97aae8-kube-api-access-9qpcp\") pod \"ovsdbserver-sb-1\" (UID: 
\"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236598 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efced2ba-0a7c-4f12-8c38-442eff97aae8-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/efced2ba-0a7c-4f12-8c38-442eff97aae8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236661 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-config\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236682 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236701 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efced2ba-0a7c-4f12-8c38-442eff97aae8-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236724 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-66d1407e-c4f5-4540-8ab1-3be1e236beae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d1407e-c4f5-4540-8ab1-3be1e236beae\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.236762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/efced2ba-0a7c-4f12-8c38-442eff97aae8-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.237151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efced2ba-0a7c-4f12-8c38-442eff97aae8-config\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.237260 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/efced2ba-0a7c-4f12-8c38-442eff97aae8-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.238440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.241456 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.241440236 podStartE2EDuration="3.241440236s" 
podCreationTimestamp="2026-01-01 09:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:55:47.239756799 +0000 UTC m=+5356.375025608" watchObservedRunningTime="2026-01-01 09:55:47.241440236 +0000 UTC m=+5356.376709015" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.241841 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/efced2ba-0a7c-4f12-8c38-442eff97aae8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.242409 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.242437 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46ac97b0-86db-4d74-8680-034f2919880d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46ac97b0-86db-4d74-8680-034f2919880d\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b80a399ca72dec115213c42ad8e355d2fb59f4f689ec3cfcce8e2b87ecb8ab9b/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.243044 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.243378 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.245407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-config\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.246469 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efced2ba-0a7c-4f12-8c38-442eff97aae8-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.246986 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.247013 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-66d1407e-c4f5-4540-8ab1-3be1e236beae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d1407e-c4f5-4540-8ab1-3be1e236beae\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d9a9f7b0885ade83b1ffa7f109928be48cf0a515d1eb371c6db6340835b53bb5/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.250396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.252670 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.253626 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efced2ba-0a7c-4f12-8c38-442eff97aae8-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.258337 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/efced2ba-0a7c-4f12-8c38-442eff97aae8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: 
\"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.267400 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qpcp\" (UniqueName: \"kubernetes.io/projected/efced2ba-0a7c-4f12-8c38-442eff97aae8-kube-api-access-9qpcp\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.275056 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ds69\" (UniqueName: \"kubernetes.io/projected/eb6e3759-2038-4c6c-bd1a-4702d3f638f6-kube-api-access-6ds69\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.279721 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.279706361 podStartE2EDuration="3.279706361s" podCreationTimestamp="2026-01-01 09:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:55:47.272772004 +0000 UTC m=+5356.408040773" watchObservedRunningTime="2026-01-01 09:55:47.279706361 +0000 UTC m=+5356.414975130" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.292530 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-66d1407e-c4f5-4540-8ab1-3be1e236beae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d1407e-c4f5-4540-8ab1-3be1e236beae\") pod \"ovsdbserver-sb-1\" (UID: \"efced2ba-0a7c-4f12-8c38-442eff97aae8\") " pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.294286 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46ac97b0-86db-4d74-8680-034f2919880d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46ac97b0-86db-4d74-8680-034f2919880d\") pod \"ovsdbserver-sb-2\" (UID: \"eb6e3759-2038-4c6c-bd1a-4702d3f638f6\") " pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.391663 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 01 09:55:47 crc kubenswrapper[4867]: W0101 09:55:47.394030 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6794a96b_8a53_4b4d_81c5_61f54a3fe243.slice/crio-54f528a04aa155babd61323ef2b0b4629a5ab8a13b803c5a765037d8965be164 WatchSource:0}: Error finding container 54f528a04aa155babd61323ef2b0b4629a5ab8a13b803c5a765037d8965be164: Status 404 returned error can't find the container with id 54f528a04aa155babd61323ef2b0b4629a5ab8a13b803c5a765037d8965be164 Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.563462 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.575341 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:47 crc kubenswrapper[4867]: I0101 09:55:47.755556 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 01 09:55:47 crc kubenswrapper[4867]: W0101 09:55:47.762200 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d31a0cf_43f1_4682_a8f0_2e778d2a06e4.slice/crio-9f0b558641c5ff149aebc6b2dbc2a05eb70eaefe72ec9560fb26267ad907f944 WatchSource:0}: Error finding container 9f0b558641c5ff149aebc6b2dbc2a05eb70eaefe72ec9560fb26267ad907f944: Status 404 returned error can't find the container with id 9f0b558641c5ff149aebc6b2dbc2a05eb70eaefe72ec9560fb26267ad907f944 Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.150364 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.229269 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"6794a96b-8a53-4b4d-81c5-61f54a3fe243","Type":"ContainerStarted","Data":"ab333299635f945291820c31a1752b2a04b99689589a2890fbab3d8d11dea09f"} Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.229318 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"6794a96b-8a53-4b4d-81c5-61f54a3fe243","Type":"ContainerStarted","Data":"091f09e9387d59451fb9754498ebb35e04c0f3a59722d973f88ce8317e997eb6"} Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.229329 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"6794a96b-8a53-4b4d-81c5-61f54a3fe243","Type":"ContainerStarted","Data":"54f528a04aa155babd61323ef2b0b4629a5ab8a13b803c5a765037d8965be164"} Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.236457 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4","Type":"ContainerStarted","Data":"6fd854e793164e7c4be2f60a5af596eadb66d08e1f96b42c604f0a0ab50a4e85"} Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.236497 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4","Type":"ContainerStarted","Data":"e082f291181ad48cc6fbde4f199e941e430b568424413271706197fcb10488f8"} Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.236509 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3d31a0cf-43f1-4682-a8f0-2e778d2a06e4","Type":"ContainerStarted","Data":"9f0b558641c5ff149aebc6b2dbc2a05eb70eaefe72ec9560fb26267ad907f944"} Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.238429 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"efced2ba-0a7c-4f12-8c38-442eff97aae8","Type":"ContainerStarted","Data":"839fde3fe6122f280e1785879b327ef941ae01041a656428640f9e316d1269c1"} Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.258632 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.266687 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.266668791 podStartE2EDuration="4.266668791s" podCreationTimestamp="2026-01-01 09:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:55:48.247194459 +0000 UTC m=+5357.382463278" watchObservedRunningTime="2026-01-01 09:55:48.266668791 +0000 UTC m=+5357.401937570" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.274605 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.274590666 
podStartE2EDuration="3.274590666s" podCreationTimestamp="2026-01-01 09:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:55:48.268264716 +0000 UTC m=+5357.403533495" watchObservedRunningTime="2026-01-01 09:55:48.274590666 +0000 UTC m=+5357.409859425" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.440716 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8p875"] Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.442315 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.450141 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p875"] Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.562929 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-utilities\") pod \"redhat-marketplace-8p875\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.563025 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh99j\" (UniqueName: \"kubernetes.io/projected/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-kube-api-access-wh99j\") pod \"redhat-marketplace-8p875\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.563048 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-catalog-content\") pod 
\"redhat-marketplace-8p875\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.630924 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.650107 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.662287 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.664121 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh99j\" (UniqueName: \"kubernetes.io/projected/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-kube-api-access-wh99j\") pod \"redhat-marketplace-8p875\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.664248 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-catalog-content\") pod \"redhat-marketplace-8p875\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.664390 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-utilities\") pod \"redhat-marketplace-8p875\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.664818 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-catalog-content\") pod \"redhat-marketplace-8p875\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.664820 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-utilities\") pod \"redhat-marketplace-8p875\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.683149 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh99j\" (UniqueName: \"kubernetes.io/projected/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-kube-api-access-wh99j\") pod \"redhat-marketplace-8p875\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:48 crc kubenswrapper[4867]: I0101 09:55:48.760020 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:49 crc kubenswrapper[4867]: W0101 09:55:49.198853 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0d1f4d_168e_4dba_9b60_9262d4cc24b6.slice/crio-9b72f2821048ab188282425a096f083c0acd9ff9e299abaceeaf6de24d9f37b0 WatchSource:0}: Error finding container 9b72f2821048ab188282425a096f083c0acd9ff9e299abaceeaf6de24d9f37b0: Status 404 returned error can't find the container with id 9b72f2821048ab188282425a096f083c0acd9ff9e299abaceeaf6de24d9f37b0 Jan 01 09:55:49 crc kubenswrapper[4867]: I0101 09:55:49.201850 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p875"] Jan 01 09:55:49 crc kubenswrapper[4867]: I0101 09:55:49.245981 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p875" event={"ID":"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6","Type":"ContainerStarted","Data":"9b72f2821048ab188282425a096f083c0acd9ff9e299abaceeaf6de24d9f37b0"} Jan 01 09:55:49 crc kubenswrapper[4867]: I0101 09:55:49.248736 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"eb6e3759-2038-4c6c-bd1a-4702d3f638f6","Type":"ContainerStarted","Data":"75471f9995d4bf696d51c80cf94f441e8e50faff35889f566c282fa804905a7b"} Jan 01 09:55:49 crc kubenswrapper[4867]: I0101 09:55:49.248803 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"eb6e3759-2038-4c6c-bd1a-4702d3f638f6","Type":"ContainerStarted","Data":"961bda0a820cd5a624f72e32e5d12f4b8ac6f7044ea4f74cf39ecdd216e42bc1"} Jan 01 09:55:49 crc kubenswrapper[4867]: I0101 09:55:49.248822 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" 
event={"ID":"eb6e3759-2038-4c6c-bd1a-4702d3f638f6","Type":"ContainerStarted","Data":"dba17e077ba57e8455423ac07f344eea450290d6d190141075146b24d37e9429"} Jan 01 09:55:49 crc kubenswrapper[4867]: I0101 09:55:49.250744 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"efced2ba-0a7c-4f12-8c38-442eff97aae8","Type":"ContainerStarted","Data":"1b9d1a02a89fe186b9c4e9268d6813db8f2a73fcf91447647d555e89f68ff75e"} Jan 01 09:55:49 crc kubenswrapper[4867]: I0101 09:55:49.250795 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"efced2ba-0a7c-4f12-8c38-442eff97aae8","Type":"ContainerStarted","Data":"f92a877b350090fd59707ac7cb523dc8bed7b8698a6a4f33b6768180f519ebcb"} Jan 01 09:55:49 crc kubenswrapper[4867]: I0101 09:55:49.276676 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.276657384 podStartE2EDuration="4.276657384s" podCreationTimestamp="2026-01-01 09:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:55:49.272500866 +0000 UTC m=+5358.407769645" watchObservedRunningTime="2026-01-01 09:55:49.276657384 +0000 UTC m=+5358.411926153" Jan 01 09:55:49 crc kubenswrapper[4867]: I0101 09:55:49.295737 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.295718424 podStartE2EDuration="4.295718424s" podCreationTimestamp="2026-01-01 09:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:55:49.291647179 +0000 UTC m=+5358.426916008" watchObservedRunningTime="2026-01-01 09:55:49.295718424 +0000 UTC m=+5358.430987203" Jan 01 09:55:50 crc kubenswrapper[4867]: I0101 09:55:50.236496 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:50 crc kubenswrapper[4867]: I0101 09:55:50.264603 4867 generic.go:334] "Generic (PLEG): container finished" podID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" containerID="009ab88d10c65a026af1998923263bb357b7a7acb187a4bbfb066ddb2a29d61c" exitCode=0 Jan 01 09:55:50 crc kubenswrapper[4867]: I0101 09:55:50.264655 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p875" event={"ID":"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6","Type":"ContainerDied","Data":"009ab88d10c65a026af1998923263bb357b7a7acb187a4bbfb066ddb2a29d61c"} Jan 01 09:55:50 crc kubenswrapper[4867]: I0101 09:55:50.311427 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:50 crc kubenswrapper[4867]: I0101 09:55:50.564990 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:50 crc kubenswrapper[4867]: I0101 09:55:50.575500 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:50 crc kubenswrapper[4867]: I0101 09:55:50.631063 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:50 crc kubenswrapper[4867]: I0101 09:55:50.650946 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:50 crc kubenswrapper[4867]: I0101 09:55:50.661834 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:51 crc kubenswrapper[4867]: I0101 09:55:51.279416 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p875" event={"ID":"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6","Type":"ContainerStarted","Data":"caf7494183cdf511b0dff49e68c34ff204f0f4dfada1f02d48a9224233f55853"} Jan 01 09:55:51 crc 
kubenswrapper[4867]: I0101 09:55:51.279777 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:51 crc kubenswrapper[4867]: I0101 09:55:51.676598 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:51 crc kubenswrapper[4867]: I0101 09:55:51.688127 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:51 crc kubenswrapper[4867]: I0101 09:55:51.713298 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:51 crc kubenswrapper[4867]: I0101 09:55:51.737308 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 01 09:55:51 crc kubenswrapper[4867]: I0101 09:55:51.758650 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 01 09:55:51 crc kubenswrapper[4867]: I0101 09:55:51.953217 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-ndqdf"] Jan 01 09:55:51 crc kubenswrapper[4867]: I0101 09:55:51.956353 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:51 crc kubenswrapper[4867]: I0101 09:55:51.963526 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.012260 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-ndqdf"] Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.047025 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-ovsdbserver-nb\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.047574 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq8jx\" (UniqueName: \"kubernetes.io/projected/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-kube-api-access-dq8jx\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.047687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-config\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.047742 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-dns-svc\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " 
pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.148980 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq8jx\" (UniqueName: \"kubernetes.io/projected/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-kube-api-access-dq8jx\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.149059 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-config\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.149088 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-dns-svc\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.149129 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-ovsdbserver-nb\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.150235 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-config\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 
09:55:52.150372 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-ovsdbserver-nb\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.150685 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-dns-svc\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.187835 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq8jx\" (UniqueName: \"kubernetes.io/projected/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-kube-api-access-dq8jx\") pod \"dnsmasq-dns-8574559fdf-ndqdf\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.271679 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.288666 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.288693 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p875" event={"ID":"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6","Type":"ContainerDied","Data":"caf7494183cdf511b0dff49e68c34ff204f0f4dfada1f02d48a9224233f55853"} Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.288663 4867 generic.go:334] "Generic (PLEG): container finished" podID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" containerID="caf7494183cdf511b0dff49e68c34ff204f0f4dfada1f02d48a9224233f55853" exitCode=0 Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.347506 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.506836 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-ndqdf"] Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.533719 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-9tdw9"] Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.535424 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.542221 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.549547 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-9tdw9"] Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.564195 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.575845 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.659670 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-sb\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.659734 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-config\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.659833 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-dns-svc\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.659918 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-nb\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.659990 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpgtb\" (UniqueName: \"kubernetes.io/projected/a5017d29-c559-474f-b05c-f54aa95fc6a6-kube-api-access-mpgtb\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.761201 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpgtb\" (UniqueName: \"kubernetes.io/projected/a5017d29-c559-474f-b05c-f54aa95fc6a6-kube-api-access-mpgtb\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.761271 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-sb\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.761305 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-config\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.761366 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-dns-svc\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.761390 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-nb\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.762255 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-sb\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.762268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-nb\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.762755 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-config\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.763864 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-dns-svc\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.780137 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpgtb\" (UniqueName: \"kubernetes.io/projected/a5017d29-c559-474f-b05c-f54aa95fc6a6-kube-api-access-mpgtb\") pod \"dnsmasq-dns-69db5595f9-9tdw9\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.861382 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:52 crc kubenswrapper[4867]: I0101 09:55:52.879586 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-ndqdf"] Jan 01 09:55:52 crc kubenswrapper[4867]: W0101 09:55:52.884994 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20da3d83_c1e8_40cc_9103_0f97f89a7eaa.slice/crio-438e3b9f828d7f7d966f02c7cb1fb496fa96754323cc46a1fc6a0e0c97817f5b WatchSource:0}: Error finding container 438e3b9f828d7f7d966f02c7cb1fb496fa96754323cc46a1fc6a0e0c97817f5b: Status 404 returned error can't find the container with id 438e3b9f828d7f7d966f02c7cb1fb496fa96754323cc46a1fc6a0e0c97817f5b Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.297564 4867 generic.go:334] "Generic (PLEG): container finished" podID="20da3d83-c1e8-40cc-9103-0f97f89a7eaa" containerID="ca0f4958f09fa4c7d95f7b8130a8e3d459d2c75232ea1c882183776cd26ac4cc" exitCode=0 Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.297743 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" 
event={"ID":"20da3d83-c1e8-40cc-9103-0f97f89a7eaa","Type":"ContainerDied","Data":"ca0f4958f09fa4c7d95f7b8130a8e3d459d2c75232ea1c882183776cd26ac4cc"} Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.298007 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" event={"ID":"20da3d83-c1e8-40cc-9103-0f97f89a7eaa","Type":"ContainerStarted","Data":"438e3b9f828d7f7d966f02c7cb1fb496fa96754323cc46a1fc6a0e0c97817f5b"} Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.301901 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p875" event={"ID":"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6","Type":"ContainerStarted","Data":"510294dc44f059bc7ad5bc22964649ff3507c6e444a78c614372669e9057f88a"} Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.343344 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8p875" podStartSLOduration=2.905836903 podStartE2EDuration="5.343328741s" podCreationTimestamp="2026-01-01 09:55:48 +0000 UTC" firstStartedPulling="2026-01-01 09:55:50.269804738 +0000 UTC m=+5359.405073547" lastFinishedPulling="2026-01-01 09:55:52.707296616 +0000 UTC m=+5361.842565385" observedRunningTime="2026-01-01 09:55:53.340327546 +0000 UTC m=+5362.475596325" watchObservedRunningTime="2026-01-01 09:55:53.343328741 +0000 UTC m=+5362.478597510" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.376063 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-9tdw9"] Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.601602 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.642880 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.655600 4867 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.696796 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.699255 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.886749 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-config\") pod \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.886848 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq8jx\" (UniqueName: \"kubernetes.io/projected/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-kube-api-access-dq8jx\") pod \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.886897 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-ovsdbserver-nb\") pod \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.886919 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-dns-svc\") pod \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\" (UID: \"20da3d83-c1e8-40cc-9103-0f97f89a7eaa\") " Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.893153 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-kube-api-access-dq8jx" (OuterVolumeSpecName: "kube-api-access-dq8jx") pod "20da3d83-c1e8-40cc-9103-0f97f89a7eaa" (UID: "20da3d83-c1e8-40cc-9103-0f97f89a7eaa"). InnerVolumeSpecName "kube-api-access-dq8jx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.908571 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "20da3d83-c1e8-40cc-9103-0f97f89a7eaa" (UID: "20da3d83-c1e8-40cc-9103-0f97f89a7eaa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.909106 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "20da3d83-c1e8-40cc-9103-0f97f89a7eaa" (UID: "20da3d83-c1e8-40cc-9103-0f97f89a7eaa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.918593 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-config" (OuterVolumeSpecName: "config") pod "20da3d83-c1e8-40cc-9103-0f97f89a7eaa" (UID: "20da3d83-c1e8-40cc-9103-0f97f89a7eaa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.989085 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-config\") on node \"crc\" DevicePath \"\"" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.989137 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq8jx\" (UniqueName: \"kubernetes.io/projected/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-kube-api-access-dq8jx\") on node \"crc\" DevicePath \"\"" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.989158 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 09:55:53 crc kubenswrapper[4867]: I0101 09:55:53.989175 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20da3d83-c1e8-40cc-9103-0f97f89a7eaa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 09:55:54 crc kubenswrapper[4867]: I0101 09:55:54.310175 4867 generic.go:334] "Generic (PLEG): container finished" podID="a5017d29-c559-474f-b05c-f54aa95fc6a6" containerID="061b19fc4abd885559de0a0d33eb6624450c8db6c671f153ef4897101b33665b" exitCode=0 Jan 01 09:55:54 crc kubenswrapper[4867]: I0101 09:55:54.310216 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" event={"ID":"a5017d29-c559-474f-b05c-f54aa95fc6a6","Type":"ContainerDied","Data":"061b19fc4abd885559de0a0d33eb6624450c8db6c671f153ef4897101b33665b"} Jan 01 09:55:54 crc kubenswrapper[4867]: I0101 09:55:54.310265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" event={"ID":"a5017d29-c559-474f-b05c-f54aa95fc6a6","Type":"ContainerStarted","Data":"8a8cc7a55c72c5882752564c5eb3df3c4bf5e172832281fd6322de600649d941"} Jan 
01 09:55:54 crc kubenswrapper[4867]: I0101 09:55:54.317217 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" Jan 01 09:55:54 crc kubenswrapper[4867]: I0101 09:55:54.317628 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8574559fdf-ndqdf" event={"ID":"20da3d83-c1e8-40cc-9103-0f97f89a7eaa","Type":"ContainerDied","Data":"438e3b9f828d7f7d966f02c7cb1fb496fa96754323cc46a1fc6a0e0c97817f5b"} Jan 01 09:55:54 crc kubenswrapper[4867]: I0101 09:55:54.317975 4867 scope.go:117] "RemoveContainer" containerID="ca0f4958f09fa4c7d95f7b8130a8e3d459d2c75232ea1c882183776cd26ac4cc" Jan 01 09:55:54 crc kubenswrapper[4867]: I0101 09:55:54.511357 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-ndqdf"] Jan 01 09:55:54 crc kubenswrapper[4867]: I0101 09:55:54.516787 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-ndqdf"] Jan 01 09:55:55 crc kubenswrapper[4867]: I0101 09:55:55.145997 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20da3d83-c1e8-40cc-9103-0f97f89a7eaa" path="/var/lib/kubelet/pods/20da3d83-c1e8-40cc-9103-0f97f89a7eaa/volumes" Jan 01 09:55:55 crc kubenswrapper[4867]: I0101 09:55:55.335327 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" event={"ID":"a5017d29-c559-474f-b05c-f54aa95fc6a6","Type":"ContainerStarted","Data":"b69fde5318f8057f95070322b0e79a50405226e1619d9431cb6fe75616b59aac"} Jan 01 09:55:55 crc kubenswrapper[4867]: I0101 09:55:55.336072 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:55:55 crc kubenswrapper[4867]: I0101 09:55:55.368761 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" podStartSLOduration=3.368737311 podStartE2EDuration="3.368737311s" 
podCreationTimestamp="2026-01-01 09:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:55:55.365364895 +0000 UTC m=+5364.500633694" watchObservedRunningTime="2026-01-01 09:55:55.368737311 +0000 UTC m=+5364.504006110" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.446379 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 01 09:55:56 crc kubenswrapper[4867]: E0101 09:55:56.447524 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20da3d83-c1e8-40cc-9103-0f97f89a7eaa" containerName="init" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.447552 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="20da3d83-c1e8-40cc-9103-0f97f89a7eaa" containerName="init" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.447872 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="20da3d83-c1e8-40cc-9103-0f97f89a7eaa" containerName="init" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.449146 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.452366 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.462601 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.536784 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhtkb\" (UniqueName: \"kubernetes.io/projected/06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab-kube-api-access-vhtkb\") pod \"ovn-copy-data\" (UID: \"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab\") " pod="openstack/ovn-copy-data" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.537035 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab\") " pod="openstack/ovn-copy-data" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.537231 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-78de1331-2dac-486c-b0b2-64db602b0dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-78de1331-2dac-486c-b0b2-64db602b0dd8\") pod \"ovn-copy-data\" (UID: \"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab\") " pod="openstack/ovn-copy-data" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.639719 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-78de1331-2dac-486c-b0b2-64db602b0dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-78de1331-2dac-486c-b0b2-64db602b0dd8\") pod \"ovn-copy-data\" (UID: \"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab\") " pod="openstack/ovn-copy-data" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 
09:55:56.639959 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhtkb\" (UniqueName: \"kubernetes.io/projected/06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab-kube-api-access-vhtkb\") pod \"ovn-copy-data\" (UID: \"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab\") " pod="openstack/ovn-copy-data" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.640073 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab\") " pod="openstack/ovn-copy-data" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.646339 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.646414 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-78de1331-2dac-486c-b0b2-64db602b0dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-78de1331-2dac-486c-b0b2-64db602b0dd8\") pod \"ovn-copy-data\" (UID: \"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/08b8c51ed133b39bf6fe91c3ede7ea8bd777c1e58dfaae633001781cdd9ab41f/globalmount\"" pod="openstack/ovn-copy-data" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.653008 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab\") " pod="openstack/ovn-copy-data" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.675124 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhtkb\" (UniqueName: 
\"kubernetes.io/projected/06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab-kube-api-access-vhtkb\") pod \"ovn-copy-data\" (UID: \"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab\") " pod="openstack/ovn-copy-data" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.702121 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-78de1331-2dac-486c-b0b2-64db602b0dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-78de1331-2dac-486c-b0b2-64db602b0dd8\") pod \"ovn-copy-data\" (UID: \"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab\") " pod="openstack/ovn-copy-data" Jan 01 09:55:56 crc kubenswrapper[4867]: I0101 09:55:56.787282 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 01 09:55:57 crc kubenswrapper[4867]: I0101 09:55:57.398679 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 01 09:55:57 crc kubenswrapper[4867]: W0101 09:55:57.398787 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06c46cb6_0f6e_49e3_bbd8_c0ffbedfd8ab.slice/crio-d36c1a2ef49d31d549bf6e95eacd01b392c6f9183165af2f870815052389dc0c WatchSource:0}: Error finding container d36c1a2ef49d31d549bf6e95eacd01b392c6f9183165af2f870815052389dc0c: Status 404 returned error can't find the container with id d36c1a2ef49d31d549bf6e95eacd01b392c6f9183165af2f870815052389dc0c Jan 01 09:55:57 crc kubenswrapper[4867]: I0101 09:55:57.403129 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 01 09:55:58 crc kubenswrapper[4867]: I0101 09:55:58.369215 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab","Type":"ContainerStarted","Data":"d36c1a2ef49d31d549bf6e95eacd01b392c6f9183165af2f870815052389dc0c"} Jan 01 09:55:58 crc kubenswrapper[4867]: I0101 09:55:58.761316 4867 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:58 crc kubenswrapper[4867]: I0101 09:55:58.761588 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:58 crc kubenswrapper[4867]: I0101 09:55:58.828263 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:59 crc kubenswrapper[4867]: I0101 09:55:59.393222 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab","Type":"ContainerStarted","Data":"d271fbd07c531618d0b22cf4c695d1991594c82a28dabd606a74c1f7efcad842"} Jan 01 09:55:59 crc kubenswrapper[4867]: I0101 09:55:59.415634 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.549443881 podStartE2EDuration="4.415611668s" podCreationTimestamp="2026-01-01 09:55:55 +0000 UTC" firstStartedPulling="2026-01-01 09:55:57.402827916 +0000 UTC m=+5366.538096695" lastFinishedPulling="2026-01-01 09:55:58.268995703 +0000 UTC m=+5367.404264482" observedRunningTime="2026-01-01 09:55:59.410222326 +0000 UTC m=+5368.545491145" watchObservedRunningTime="2026-01-01 09:55:59.415611668 +0000 UTC m=+5368.550880437" Jan 01 09:55:59 crc kubenswrapper[4867]: I0101 09:55:59.457962 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:55:59 crc kubenswrapper[4867]: I0101 09:55:59.508748 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p875"] Jan 01 09:56:01 crc kubenswrapper[4867]: I0101 09:56:01.417525 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8p875" podUID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" 
containerName="registry-server" containerID="cri-o://510294dc44f059bc7ad5bc22964649ff3507c6e444a78c614372669e9057f88a" gracePeriod=2 Jan 01 09:56:02 crc kubenswrapper[4867]: I0101 09:56:02.863433 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:56:02 crc kubenswrapper[4867]: I0101 09:56:02.929115 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-wnnzv"] Jan 01 09:56:02 crc kubenswrapper[4867]: I0101 09:56:02.929497 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" podUID="cfe3ab9e-fd94-4a13-bd03-5716336019bf" containerName="dnsmasq-dns" containerID="cri-o://0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c" gracePeriod=10 Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.436337 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.437002 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfe3ab9e-fd94-4a13-bd03-5716336019bf" containerID="0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c" exitCode=0 Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.437064 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" event={"ID":"cfe3ab9e-fd94-4a13-bd03-5716336019bf","Type":"ContainerDied","Data":"0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c"} Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.437244 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" event={"ID":"cfe3ab9e-fd94-4a13-bd03-5716336019bf","Type":"ContainerDied","Data":"b5637f1be7b40db6d99d4035f6cfe9cbbb4b70cfd568619f875fb1830b5db4a3"} Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.437266 4867 scope.go:117] 
"RemoveContainer" containerID="0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.443824 4867 generic.go:334] "Generic (PLEG): container finished" podID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" containerID="510294dc44f059bc7ad5bc22964649ff3507c6e444a78c614372669e9057f88a" exitCode=0 Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.443902 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p875" event={"ID":"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6","Type":"ContainerDied","Data":"510294dc44f059bc7ad5bc22964649ff3507c6e444a78c614372669e9057f88a"} Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.475683 4867 scope.go:117] "RemoveContainer" containerID="62138944240db446e9a41ec6f854cd32f3a93569e40c327b3f33d330760472f5" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.510328 4867 scope.go:117] "RemoveContainer" containerID="0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c" Jan 01 09:56:03 crc kubenswrapper[4867]: E0101 09:56:03.511783 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c\": container with ID starting with 0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c not found: ID does not exist" containerID="0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.511814 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c"} err="failed to get container status \"0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c\": rpc error: code = NotFound desc = could not find container \"0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c\": container with ID starting with 
0f76aaeb6ee1331c789dc5243640746809b7bfd2e51d845f7f2bdfd176aae86c not found: ID does not exist" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.511834 4867 scope.go:117] "RemoveContainer" containerID="62138944240db446e9a41ec6f854cd32f3a93569e40c327b3f33d330760472f5" Jan 01 09:56:03 crc kubenswrapper[4867]: E0101 09:56:03.512700 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62138944240db446e9a41ec6f854cd32f3a93569e40c327b3f33d330760472f5\": container with ID starting with 62138944240db446e9a41ec6f854cd32f3a93569e40c327b3f33d330760472f5 not found: ID does not exist" containerID="62138944240db446e9a41ec6f854cd32f3a93569e40c327b3f33d330760472f5" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.512719 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62138944240db446e9a41ec6f854cd32f3a93569e40c327b3f33d330760472f5"} err="failed to get container status \"62138944240db446e9a41ec6f854cd32f3a93569e40c327b3f33d330760472f5\": rpc error: code = NotFound desc = could not find container \"62138944240db446e9a41ec6f854cd32f3a93569e40c327b3f33d330760472f5\": container with ID starting with 62138944240db446e9a41ec6f854cd32f3a93569e40c327b3f33d330760472f5 not found: ID does not exist" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.574924 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.593714 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-dns-svc\") pod \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.593903 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs8bk\" (UniqueName: \"kubernetes.io/projected/cfe3ab9e-fd94-4a13-bd03-5716336019bf-kube-api-access-bs8bk\") pod \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.593982 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-config\") pod \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\" (UID: \"cfe3ab9e-fd94-4a13-bd03-5716336019bf\") " Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.601847 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe3ab9e-fd94-4a13-bd03-5716336019bf-kube-api-access-bs8bk" (OuterVolumeSpecName: "kube-api-access-bs8bk") pod "cfe3ab9e-fd94-4a13-bd03-5716336019bf" (UID: "cfe3ab9e-fd94-4a13-bd03-5716336019bf"). InnerVolumeSpecName "kube-api-access-bs8bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.637843 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cfe3ab9e-fd94-4a13-bd03-5716336019bf" (UID: "cfe3ab9e-fd94-4a13-bd03-5716336019bf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.645448 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-config" (OuterVolumeSpecName: "config") pod "cfe3ab9e-fd94-4a13-bd03-5716336019bf" (UID: "cfe3ab9e-fd94-4a13-bd03-5716336019bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.695233 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-utilities\") pod \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.695292 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-catalog-content\") pod \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.695479 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh99j\" (UniqueName: \"kubernetes.io/projected/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-kube-api-access-wh99j\") pod \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\" (UID: \"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6\") " Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.695847 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs8bk\" (UniqueName: \"kubernetes.io/projected/cfe3ab9e-fd94-4a13-bd03-5716336019bf-kube-api-access-bs8bk\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.695861 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-config\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.695872 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfe3ab9e-fd94-4a13-bd03-5716336019bf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.698448 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-utilities" (OuterVolumeSpecName: "utilities") pod "9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" (UID: "9f0d1f4d-168e-4dba-9b60-9262d4cc24b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.700328 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-kube-api-access-wh99j" (OuterVolumeSpecName: "kube-api-access-wh99j") pod "9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" (UID: "9f0d1f4d-168e-4dba-9b60-9262d4cc24b6"). InnerVolumeSpecName "kube-api-access-wh99j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.720527 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" (UID: "9f0d1f4d-168e-4dba-9b60-9262d4cc24b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.798650 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.798746 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:03 crc kubenswrapper[4867]: I0101 09:56:03.798764 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh99j\" (UniqueName: \"kubernetes.io/projected/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6-kube-api-access-wh99j\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:04 crc kubenswrapper[4867]: I0101 09:56:04.456990 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-wnnzv" Jan 01 09:56:04 crc kubenswrapper[4867]: I0101 09:56:04.463248 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p875" event={"ID":"9f0d1f4d-168e-4dba-9b60-9262d4cc24b6","Type":"ContainerDied","Data":"9b72f2821048ab188282425a096f083c0acd9ff9e299abaceeaf6de24d9f37b0"} Jan 01 09:56:04 crc kubenswrapper[4867]: I0101 09:56:04.463342 4867 scope.go:117] "RemoveContainer" containerID="510294dc44f059bc7ad5bc22964649ff3507c6e444a78c614372669e9057f88a" Jan 01 09:56:04 crc kubenswrapper[4867]: I0101 09:56:04.463557 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p875" Jan 01 09:56:04 crc kubenswrapper[4867]: I0101 09:56:04.490346 4867 scope.go:117] "RemoveContainer" containerID="caf7494183cdf511b0dff49e68c34ff204f0f4dfada1f02d48a9224233f55853" Jan 01 09:56:04 crc kubenswrapper[4867]: I0101 09:56:04.507428 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-wnnzv"] Jan 01 09:56:04 crc kubenswrapper[4867]: I0101 09:56:04.514027 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-wnnzv"] Jan 01 09:56:04 crc kubenswrapper[4867]: I0101 09:56:04.523531 4867 scope.go:117] "RemoveContainer" containerID="009ab88d10c65a026af1998923263bb357b7a7acb187a4bbfb066ddb2a29d61c" Jan 01 09:56:04 crc kubenswrapper[4867]: I0101 09:56:04.587227 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p875"] Jan 01 09:56:04 crc kubenswrapper[4867]: I0101 09:56:04.605101 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p875"] Jan 01 09:56:05 crc kubenswrapper[4867]: I0101 09:56:05.144804 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" path="/var/lib/kubelet/pods/9f0d1f4d-168e-4dba-9b60-9262d4cc24b6/volumes" Jan 01 09:56:05 crc kubenswrapper[4867]: I0101 09:56:05.146388 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfe3ab9e-fd94-4a13-bd03-5716336019bf" path="/var/lib/kubelet/pods/cfe3ab9e-fd94-4a13-bd03-5716336019bf/volumes" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.790094 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 01 09:56:06 crc kubenswrapper[4867]: E0101 09:56:06.790387 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" containerName="extract-utilities" Jan 01 09:56:06 crc kubenswrapper[4867]: 
I0101 09:56:06.791684 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" containerName="extract-utilities" Jan 01 09:56:06 crc kubenswrapper[4867]: E0101 09:56:06.791698 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" containerName="extract-content" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.791706 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" containerName="extract-content" Jan 01 09:56:06 crc kubenswrapper[4867]: E0101 09:56:06.791720 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe3ab9e-fd94-4a13-bd03-5716336019bf" containerName="dnsmasq-dns" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.791726 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe3ab9e-fd94-4a13-bd03-5716336019bf" containerName="dnsmasq-dns" Jan 01 09:56:06 crc kubenswrapper[4867]: E0101 09:56:06.791737 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" containerName="registry-server" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.791743 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" containerName="registry-server" Jan 01 09:56:06 crc kubenswrapper[4867]: E0101 09:56:06.791762 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe3ab9e-fd94-4a13-bd03-5716336019bf" containerName="init" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.791768 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe3ab9e-fd94-4a13-bd03-5716336019bf" containerName="init" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.791947 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe3ab9e-fd94-4a13-bd03-5716336019bf" containerName="dnsmasq-dns" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.791965 4867 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9f0d1f4d-168e-4dba-9b60-9262d4cc24b6" containerName="registry-server" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.793813 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.801983 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.802402 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.802766 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9gmhm" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.802854 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.815761 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.957363 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c356edf9-e029-4fb9-b0e5-7cd7b5544429-scripts\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.957794 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c356edf9-e029-4fb9-b0e5-7cd7b5544429-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.958080 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c356edf9-e029-4fb9-b0e5-7cd7b5544429-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.958379 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c356edf9-e029-4fb9-b0e5-7cd7b5544429-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.958680 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvtm9\" (UniqueName: \"kubernetes.io/projected/c356edf9-e029-4fb9-b0e5-7cd7b5544429-kube-api-access-jvtm9\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.958870 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c356edf9-e029-4fb9-b0e5-7cd7b5544429-config\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:06 crc kubenswrapper[4867]: I0101 09:56:06.959120 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c356edf9-e029-4fb9-b0e5-7cd7b5544429-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.060902 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c356edf9-e029-4fb9-b0e5-7cd7b5544429-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.061204 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvtm9\" (UniqueName: \"kubernetes.io/projected/c356edf9-e029-4fb9-b0e5-7cd7b5544429-kube-api-access-jvtm9\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.061344 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c356edf9-e029-4fb9-b0e5-7cd7b5544429-config\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.061496 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c356edf9-e029-4fb9-b0e5-7cd7b5544429-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.061705 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c356edf9-e029-4fb9-b0e5-7cd7b5544429-scripts\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.061843 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c356edf9-e029-4fb9-b0e5-7cd7b5544429-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 
09:56:07.062052 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c356edf9-e029-4fb9-b0e5-7cd7b5544429-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.062695 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c356edf9-e029-4fb9-b0e5-7cd7b5544429-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.063147 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c356edf9-e029-4fb9-b0e5-7cd7b5544429-config\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.063339 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c356edf9-e029-4fb9-b0e5-7cd7b5544429-scripts\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.068049 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c356edf9-e029-4fb9-b0e5-7cd7b5544429-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.070390 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c356edf9-e029-4fb9-b0e5-7cd7b5544429-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.074878 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c356edf9-e029-4fb9-b0e5-7cd7b5544429-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.087543 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvtm9\" (UniqueName: \"kubernetes.io/projected/c356edf9-e029-4fb9-b0e5-7cd7b5544429-kube-api-access-jvtm9\") pod \"ovn-northd-0\" (UID: \"c356edf9-e029-4fb9-b0e5-7cd7b5544429\") " pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.133760 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 01 09:56:07 crc kubenswrapper[4867]: I0101 09:56:07.665828 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 01 09:56:07 crc kubenswrapper[4867]: W0101 09:56:07.673459 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc356edf9_e029_4fb9_b0e5_7cd7b5544429.slice/crio-3101a498633597b6a71ab553adc10350cc965832378a251907b508021b3e13e5 WatchSource:0}: Error finding container 3101a498633597b6a71ab553adc10350cc965832378a251907b508021b3e13e5: Status 404 returned error can't find the container with id 3101a498633597b6a71ab553adc10350cc965832378a251907b508021b3e13e5 Jan 01 09:56:08 crc kubenswrapper[4867]: I0101 09:56:08.508929 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c356edf9-e029-4fb9-b0e5-7cd7b5544429","Type":"ContainerStarted","Data":"6618b47dfe189eab56b767fc262fdb95c9ad93475ab1513e22f9829655aa368b"} Jan 01 09:56:08 crc kubenswrapper[4867]: I0101 
09:56:08.509497 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 01 09:56:08 crc kubenswrapper[4867]: I0101 09:56:08.509511 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c356edf9-e029-4fb9-b0e5-7cd7b5544429","Type":"ContainerStarted","Data":"fea6ae0fe1a82e2a110f30c2c61195d5adcdf0324ac82f272b1a491f3e60e4ff"} Jan 01 09:56:08 crc kubenswrapper[4867]: I0101 09:56:08.509523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c356edf9-e029-4fb9-b0e5-7cd7b5544429","Type":"ContainerStarted","Data":"3101a498633597b6a71ab553adc10350cc965832378a251907b508021b3e13e5"} Jan 01 09:56:08 crc kubenswrapper[4867]: I0101 09:56:08.534866 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.534840455 podStartE2EDuration="2.534840455s" podCreationTimestamp="2026-01-01 09:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:56:08.53184931 +0000 UTC m=+5377.667118179" watchObservedRunningTime="2026-01-01 09:56:08.534840455 +0000 UTC m=+5377.670109234" Jan 01 09:56:12 crc kubenswrapper[4867]: I0101 09:56:12.908624 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-xz5g7"] Jan 01 09:56:12 crc kubenswrapper[4867]: I0101 09:56:12.910096 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-xz5g7" Jan 01 09:56:12 crc kubenswrapper[4867]: I0101 09:56:12.924481 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xz5g7"] Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.003952 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f125-account-create-update-jh6hd"] Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.004861 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f125-account-create-update-jh6hd" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.006484 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.014304 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f125-account-create-update-jh6hd"] Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.079406 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tptgf\" (UniqueName: \"kubernetes.io/projected/3f39adf8-53ff-43ed-8646-9bee0c5fad79-kube-api-access-tptgf\") pod \"keystone-db-create-xz5g7\" (UID: \"3f39adf8-53ff-43ed-8646-9bee0c5fad79\") " pod="openstack/keystone-db-create-xz5g7" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.079593 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f39adf8-53ff-43ed-8646-9bee0c5fad79-operator-scripts\") pod \"keystone-db-create-xz5g7\" (UID: \"3f39adf8-53ff-43ed-8646-9bee0c5fad79\") " pod="openstack/keystone-db-create-xz5g7" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.180783 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-operator-scripts\") pod \"keystone-f125-account-create-update-jh6hd\" (UID: \"f374d6d6-c44d-4f8a-b4ed-ee985a92300f\") " pod="openstack/keystone-f125-account-create-update-jh6hd" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.180874 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f39adf8-53ff-43ed-8646-9bee0c5fad79-operator-scripts\") pod \"keystone-db-create-xz5g7\" (UID: \"3f39adf8-53ff-43ed-8646-9bee0c5fad79\") " pod="openstack/keystone-db-create-xz5g7" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.180918 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpgxg\" (UniqueName: \"kubernetes.io/projected/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-kube-api-access-vpgxg\") pod \"keystone-f125-account-create-update-jh6hd\" (UID: \"f374d6d6-c44d-4f8a-b4ed-ee985a92300f\") " pod="openstack/keystone-f125-account-create-update-jh6hd" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.180955 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tptgf\" (UniqueName: \"kubernetes.io/projected/3f39adf8-53ff-43ed-8646-9bee0c5fad79-kube-api-access-tptgf\") pod \"keystone-db-create-xz5g7\" (UID: \"3f39adf8-53ff-43ed-8646-9bee0c5fad79\") " pod="openstack/keystone-db-create-xz5g7" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.181867 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f39adf8-53ff-43ed-8646-9bee0c5fad79-operator-scripts\") pod \"keystone-db-create-xz5g7\" (UID: \"3f39adf8-53ff-43ed-8646-9bee0c5fad79\") " pod="openstack/keystone-db-create-xz5g7" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.204015 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tptgf\" (UniqueName: \"kubernetes.io/projected/3f39adf8-53ff-43ed-8646-9bee0c5fad79-kube-api-access-tptgf\") pod \"keystone-db-create-xz5g7\" (UID: \"3f39adf8-53ff-43ed-8646-9bee0c5fad79\") " pod="openstack/keystone-db-create-xz5g7" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.232535 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xz5g7" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.283565 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-operator-scripts\") pod \"keystone-f125-account-create-update-jh6hd\" (UID: \"f374d6d6-c44d-4f8a-b4ed-ee985a92300f\") " pod="openstack/keystone-f125-account-create-update-jh6hd" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.283969 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpgxg\" (UniqueName: \"kubernetes.io/projected/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-kube-api-access-vpgxg\") pod \"keystone-f125-account-create-update-jh6hd\" (UID: \"f374d6d6-c44d-4f8a-b4ed-ee985a92300f\") " pod="openstack/keystone-f125-account-create-update-jh6hd" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.287461 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-operator-scripts\") pod \"keystone-f125-account-create-update-jh6hd\" (UID: \"f374d6d6-c44d-4f8a-b4ed-ee985a92300f\") " pod="openstack/keystone-f125-account-create-update-jh6hd" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.302410 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpgxg\" (UniqueName: \"kubernetes.io/projected/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-kube-api-access-vpgxg\") pod 
\"keystone-f125-account-create-update-jh6hd\" (UID: \"f374d6d6-c44d-4f8a-b4ed-ee985a92300f\") " pod="openstack/keystone-f125-account-create-update-jh6hd" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.323459 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f125-account-create-update-jh6hd" Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.741341 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xz5g7"] Jan 01 09:56:13 crc kubenswrapper[4867]: W0101 09:56:13.744802 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f39adf8_53ff_43ed_8646_9bee0c5fad79.slice/crio-b4fb091634ca8e8e85a00d3f5c4ed6c1724bdc35043879d76ae9b13d4d9a6046 WatchSource:0}: Error finding container b4fb091634ca8e8e85a00d3f5c4ed6c1724bdc35043879d76ae9b13d4d9a6046: Status 404 returned error can't find the container with id b4fb091634ca8e8e85a00d3f5c4ed6c1724bdc35043879d76ae9b13d4d9a6046 Jan 01 09:56:13 crc kubenswrapper[4867]: I0101 09:56:13.830680 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f125-account-create-update-jh6hd"] Jan 01 09:56:14 crc kubenswrapper[4867]: I0101 09:56:14.577401 4867 generic.go:334] "Generic (PLEG): container finished" podID="3f39adf8-53ff-43ed-8646-9bee0c5fad79" containerID="d7072e14166067c2b75ee37fba0b612ee2cbafcfee9188c3a0f2ec1971075f8d" exitCode=0 Jan 01 09:56:14 crc kubenswrapper[4867]: I0101 09:56:14.577512 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xz5g7" event={"ID":"3f39adf8-53ff-43ed-8646-9bee0c5fad79","Type":"ContainerDied","Data":"d7072e14166067c2b75ee37fba0b612ee2cbafcfee9188c3a0f2ec1971075f8d"} Jan 01 09:56:14 crc kubenswrapper[4867]: I0101 09:56:14.577569 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xz5g7" 
event={"ID":"3f39adf8-53ff-43ed-8646-9bee0c5fad79","Type":"ContainerStarted","Data":"b4fb091634ca8e8e85a00d3f5c4ed6c1724bdc35043879d76ae9b13d4d9a6046"} Jan 01 09:56:14 crc kubenswrapper[4867]: I0101 09:56:14.584742 4867 generic.go:334] "Generic (PLEG): container finished" podID="f374d6d6-c44d-4f8a-b4ed-ee985a92300f" containerID="b25daf198ff6df50addcb2237bb7effc702e76f3eaae6a252674f1805ee261b7" exitCode=0 Jan 01 09:56:14 crc kubenswrapper[4867]: I0101 09:56:14.584837 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f125-account-create-update-jh6hd" event={"ID":"f374d6d6-c44d-4f8a-b4ed-ee985a92300f","Type":"ContainerDied","Data":"b25daf198ff6df50addcb2237bb7effc702e76f3eaae6a252674f1805ee261b7"} Jan 01 09:56:14 crc kubenswrapper[4867]: I0101 09:56:14.584925 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f125-account-create-update-jh6hd" event={"ID":"f374d6d6-c44d-4f8a-b4ed-ee985a92300f","Type":"ContainerStarted","Data":"e54a7bb208d1b78e920c96b4792281c95779d16bfd896eaa06736b4adbba1d07"} Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.039872 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xz5g7" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.047397 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f125-account-create-update-jh6hd" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.146528 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tptgf\" (UniqueName: \"kubernetes.io/projected/3f39adf8-53ff-43ed-8646-9bee0c5fad79-kube-api-access-tptgf\") pod \"3f39adf8-53ff-43ed-8646-9bee0c5fad79\" (UID: \"3f39adf8-53ff-43ed-8646-9bee0c5fad79\") " Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.146819 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f39adf8-53ff-43ed-8646-9bee0c5fad79-operator-scripts\") pod \"3f39adf8-53ff-43ed-8646-9bee0c5fad79\" (UID: \"3f39adf8-53ff-43ed-8646-9bee0c5fad79\") " Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.147681 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f39adf8-53ff-43ed-8646-9bee0c5fad79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f39adf8-53ff-43ed-8646-9bee0c5fad79" (UID: "3f39adf8-53ff-43ed-8646-9bee0c5fad79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.154858 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f39adf8-53ff-43ed-8646-9bee0c5fad79-kube-api-access-tptgf" (OuterVolumeSpecName: "kube-api-access-tptgf") pod "3f39adf8-53ff-43ed-8646-9bee0c5fad79" (UID: "3f39adf8-53ff-43ed-8646-9bee0c5fad79"). InnerVolumeSpecName "kube-api-access-tptgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.248647 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-operator-scripts\") pod \"f374d6d6-c44d-4f8a-b4ed-ee985a92300f\" (UID: \"f374d6d6-c44d-4f8a-b4ed-ee985a92300f\") " Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.249258 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f374d6d6-c44d-4f8a-b4ed-ee985a92300f" (UID: "f374d6d6-c44d-4f8a-b4ed-ee985a92300f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.249582 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpgxg\" (UniqueName: \"kubernetes.io/projected/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-kube-api-access-vpgxg\") pod \"f374d6d6-c44d-4f8a-b4ed-ee985a92300f\" (UID: \"f374d6d6-c44d-4f8a-b4ed-ee985a92300f\") " Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.250229 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tptgf\" (UniqueName: \"kubernetes.io/projected/3f39adf8-53ff-43ed-8646-9bee0c5fad79-kube-api-access-tptgf\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.250253 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.250266 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3f39adf8-53ff-43ed-8646-9bee0c5fad79-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.254127 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-kube-api-access-vpgxg" (OuterVolumeSpecName: "kube-api-access-vpgxg") pod "f374d6d6-c44d-4f8a-b4ed-ee985a92300f" (UID: "f374d6d6-c44d-4f8a-b4ed-ee985a92300f"). InnerVolumeSpecName "kube-api-access-vpgxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.351293 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpgxg\" (UniqueName: \"kubernetes.io/projected/f374d6d6-c44d-4f8a-b4ed-ee985a92300f-kube-api-access-vpgxg\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.638142 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xz5g7" event={"ID":"3f39adf8-53ff-43ed-8646-9bee0c5fad79","Type":"ContainerDied","Data":"b4fb091634ca8e8e85a00d3f5c4ed6c1724bdc35043879d76ae9b13d4d9a6046"} Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.638215 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4fb091634ca8e8e85a00d3f5c4ed6c1724bdc35043879d76ae9b13d4d9a6046" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.638183 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-xz5g7" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.657630 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f125-account-create-update-jh6hd" event={"ID":"f374d6d6-c44d-4f8a-b4ed-ee985a92300f","Type":"ContainerDied","Data":"e54a7bb208d1b78e920c96b4792281c95779d16bfd896eaa06736b4adbba1d07"} Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.657679 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e54a7bb208d1b78e920c96b4792281c95779d16bfd896eaa06736b4adbba1d07" Jan 01 09:56:16 crc kubenswrapper[4867]: I0101 09:56:16.657745 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f125-account-create-update-jh6hd" Jan 01 09:56:17 crc kubenswrapper[4867]: I0101 09:56:17.205861 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.584831 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4gdmp"] Jan 01 09:56:18 crc kubenswrapper[4867]: E0101 09:56:18.585545 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f374d6d6-c44d-4f8a-b4ed-ee985a92300f" containerName="mariadb-account-create-update" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.585561 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f374d6d6-c44d-4f8a-b4ed-ee985a92300f" containerName="mariadb-account-create-update" Jan 01 09:56:18 crc kubenswrapper[4867]: E0101 09:56:18.585594 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f39adf8-53ff-43ed-8646-9bee0c5fad79" containerName="mariadb-database-create" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.585602 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f39adf8-53ff-43ed-8646-9bee0c5fad79" containerName="mariadb-database-create" Jan 01 09:56:18 crc 
kubenswrapper[4867]: I0101 09:56:18.585780 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f39adf8-53ff-43ed-8646-9bee0c5fad79" containerName="mariadb-database-create" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.585803 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f374d6d6-c44d-4f8a-b4ed-ee985a92300f" containerName="mariadb-account-create-update" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.586721 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.591547 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.591708 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x9f2t" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.591741 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.591925 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.602109 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4gdmp"] Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.620365 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-combined-ca-bundle\") pod \"keystone-db-sync-4gdmp\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.621304 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-config-data\") pod \"keystone-db-sync-4gdmp\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.621384 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltshg\" (UniqueName: \"kubernetes.io/projected/27def929-31ca-4a9c-af0e-5830dddceab9-kube-api-access-ltshg\") pod \"keystone-db-sync-4gdmp\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.722264 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-combined-ca-bundle\") pod \"keystone-db-sync-4gdmp\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.722391 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-config-data\") pod \"keystone-db-sync-4gdmp\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.722422 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltshg\" (UniqueName: \"kubernetes.io/projected/27def929-31ca-4a9c-af0e-5830dddceab9-kube-api-access-ltshg\") pod \"keystone-db-sync-4gdmp\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.727609 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-combined-ca-bundle\") 
pod \"keystone-db-sync-4gdmp\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.727800 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-config-data\") pod \"keystone-db-sync-4gdmp\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.739414 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltshg\" (UniqueName: \"kubernetes.io/projected/27def929-31ca-4a9c-af0e-5830dddceab9-kube-api-access-ltshg\") pod \"keystone-db-sync-4gdmp\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:18 crc kubenswrapper[4867]: I0101 09:56:18.967174 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:19 crc kubenswrapper[4867]: I0101 09:56:19.394092 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4gdmp"] Jan 01 09:56:19 crc kubenswrapper[4867]: W0101 09:56:19.409288 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27def929_31ca_4a9c_af0e_5830dddceab9.slice/crio-8a9e572a6de44f9b9fed3ae1e9491dceafb50d62f8b615d4170d456d702cf163 WatchSource:0}: Error finding container 8a9e572a6de44f9b9fed3ae1e9491dceafb50d62f8b615d4170d456d702cf163: Status 404 returned error can't find the container with id 8a9e572a6de44f9b9fed3ae1e9491dceafb50d62f8b615d4170d456d702cf163 Jan 01 09:56:19 crc kubenswrapper[4867]: I0101 09:56:19.683628 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4gdmp" 
event={"ID":"27def929-31ca-4a9c-af0e-5830dddceab9","Type":"ContainerStarted","Data":"8a9e572a6de44f9b9fed3ae1e9491dceafb50d62f8b615d4170d456d702cf163"} Jan 01 09:56:20 crc kubenswrapper[4867]: I0101 09:56:20.698311 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4gdmp" event={"ID":"27def929-31ca-4a9c-af0e-5830dddceab9","Type":"ContainerStarted","Data":"b7b218f3814014a085cdc41103d67acb07f08a89cc9ce5a9bbac65b7f38092fd"} Jan 01 09:56:20 crc kubenswrapper[4867]: I0101 09:56:20.753851 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4gdmp" podStartSLOduration=2.75382158 podStartE2EDuration="2.75382158s" podCreationTimestamp="2026-01-01 09:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:56:20.727850614 +0000 UTC m=+5389.863119393" watchObservedRunningTime="2026-01-01 09:56:20.75382158 +0000 UTC m=+5389.889090389" Jan 01 09:56:21 crc kubenswrapper[4867]: I0101 09:56:21.710662 4867 generic.go:334] "Generic (PLEG): container finished" podID="27def929-31ca-4a9c-af0e-5830dddceab9" containerID="b7b218f3814014a085cdc41103d67acb07f08a89cc9ce5a9bbac65b7f38092fd" exitCode=0 Jan 01 09:56:21 crc kubenswrapper[4867]: I0101 09:56:21.710725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4gdmp" event={"ID":"27def929-31ca-4a9c-af0e-5830dddceab9","Type":"ContainerDied","Data":"b7b218f3814014a085cdc41103d67acb07f08a89cc9ce5a9bbac65b7f38092fd"} Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.056551 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.207629 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltshg\" (UniqueName: \"kubernetes.io/projected/27def929-31ca-4a9c-af0e-5830dddceab9-kube-api-access-ltshg\") pod \"27def929-31ca-4a9c-af0e-5830dddceab9\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.207719 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-config-data\") pod \"27def929-31ca-4a9c-af0e-5830dddceab9\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.207774 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-combined-ca-bundle\") pod \"27def929-31ca-4a9c-af0e-5830dddceab9\" (UID: \"27def929-31ca-4a9c-af0e-5830dddceab9\") " Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.214116 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27def929-31ca-4a9c-af0e-5830dddceab9-kube-api-access-ltshg" (OuterVolumeSpecName: "kube-api-access-ltshg") pod "27def929-31ca-4a9c-af0e-5830dddceab9" (UID: "27def929-31ca-4a9c-af0e-5830dddceab9"). InnerVolumeSpecName "kube-api-access-ltshg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.235062 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27def929-31ca-4a9c-af0e-5830dddceab9" (UID: "27def929-31ca-4a9c-af0e-5830dddceab9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.271037 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-config-data" (OuterVolumeSpecName: "config-data") pod "27def929-31ca-4a9c-af0e-5830dddceab9" (UID: "27def929-31ca-4a9c-af0e-5830dddceab9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.310760 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltshg\" (UniqueName: \"kubernetes.io/projected/27def929-31ca-4a9c-af0e-5830dddceab9-kube-api-access-ltshg\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.310842 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.310862 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27def929-31ca-4a9c-af0e-5830dddceab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.739757 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4gdmp" event={"ID":"27def929-31ca-4a9c-af0e-5830dddceab9","Type":"ContainerDied","Data":"8a9e572a6de44f9b9fed3ae1e9491dceafb50d62f8b615d4170d456d702cf163"} Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.739813 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a9e572a6de44f9b9fed3ae1e9491dceafb50d62f8b615d4170d456d702cf163" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.739828 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4gdmp" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.998521 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nk2vs"] Jan 01 09:56:23 crc kubenswrapper[4867]: E0101 09:56:23.998872 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27def929-31ca-4a9c-af0e-5830dddceab9" containerName="keystone-db-sync" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.998898 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="27def929-31ca-4a9c-af0e-5830dddceab9" containerName="keystone-db-sync" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.999067 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="27def929-31ca-4a9c-af0e-5830dddceab9" containerName="keystone-db-sync" Jan 01 09:56:23 crc kubenswrapper[4867]: I0101 09:56:23.999710 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.001761 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x9f2t" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.005548 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.005572 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.005816 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.005859 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.022562 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69f667c44c-wpw5v"] Jan 01 09:56:24 crc 
kubenswrapper[4867]: I0101 09:56:24.025764 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.037301 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nk2vs"] Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.053448 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f667c44c-wpw5v"] Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.125611 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-dns-svc\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.125655 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x75pn\" (UniqueName: \"kubernetes.io/projected/ed82f483-02ce-439b-8a62-869f4d32a77d-kube-api-access-x75pn\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.125705 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-ovsdbserver-sb\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.125721 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-scripts\") pod \"keystone-bootstrap-nk2vs\" 
(UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.125742 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-config\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.125760 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvf2f\" (UniqueName: \"kubernetes.io/projected/436e2299-2fd9-4973-aa16-f5a02aa58c36-kube-api-access-bvf2f\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.125779 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-combined-ca-bundle\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.125811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-config-data\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.125835 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-fernet-keys\") pod 
\"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.125855 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-ovsdbserver-nb\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.125896 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-credential-keys\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.227041 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-ovsdbserver-sb\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.227091 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-scripts\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.227113 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-config\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: 
\"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.227134 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvf2f\" (UniqueName: \"kubernetes.io/projected/436e2299-2fd9-4973-aa16-f5a02aa58c36-kube-api-access-bvf2f\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.227152 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-combined-ca-bundle\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.227196 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-config-data\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.227218 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-fernet-keys\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.227237 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-ovsdbserver-nb\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " 
pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.227298 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-credential-keys\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.227339 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-dns-svc\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.227363 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x75pn\" (UniqueName: \"kubernetes.io/projected/ed82f483-02ce-439b-8a62-869f4d32a77d-kube-api-access-x75pn\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.229265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-ovsdbserver-sb\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.230568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-ovsdbserver-nb\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: 
I0101 09:56:24.230747 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-config\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.230943 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436e2299-2fd9-4973-aa16-f5a02aa58c36-dns-svc\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.234118 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-fernet-keys\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.234141 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-combined-ca-bundle\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.236234 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-scripts\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.239669 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-config-data\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.243973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x75pn\" (UniqueName: \"kubernetes.io/projected/ed82f483-02ce-439b-8a62-869f4d32a77d-kube-api-access-x75pn\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.245669 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-credential-keys\") pod \"keystone-bootstrap-nk2vs\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.246163 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvf2f\" (UniqueName: \"kubernetes.io/projected/436e2299-2fd9-4973-aa16-f5a02aa58c36-kube-api-access-bvf2f\") pod \"dnsmasq-dns-69f667c44c-wpw5v\" (UID: \"436e2299-2fd9-4973-aa16-f5a02aa58c36\") " pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.326670 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.362297 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.787165 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nk2vs"] Jan 01 09:56:24 crc kubenswrapper[4867]: W0101 09:56:24.789177 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded82f483_02ce_439b_8a62_869f4d32a77d.slice/crio-40616fa01b8e0f6448b653dc0415d785a127a08ece784d711ec37e85ed04c555 WatchSource:0}: Error finding container 40616fa01b8e0f6448b653dc0415d785a127a08ece784d711ec37e85ed04c555: Status 404 returned error can't find the container with id 40616fa01b8e0f6448b653dc0415d785a127a08ece784d711ec37e85ed04c555 Jan 01 09:56:24 crc kubenswrapper[4867]: I0101 09:56:24.834817 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f667c44c-wpw5v"] Jan 01 09:56:24 crc kubenswrapper[4867]: W0101 09:56:24.836698 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod436e2299_2fd9_4973_aa16_f5a02aa58c36.slice/crio-fc37bc3d2af8957f58f5b08bb81b3fdea332522478931d9f66901d073db13cb0 WatchSource:0}: Error finding container fc37bc3d2af8957f58f5b08bb81b3fdea332522478931d9f66901d073db13cb0: Status 404 returned error can't find the container with id fc37bc3d2af8957f58f5b08bb81b3fdea332522478931d9f66901d073db13cb0 Jan 01 09:56:25 crc kubenswrapper[4867]: I0101 09:56:25.760374 4867 generic.go:334] "Generic (PLEG): container finished" podID="436e2299-2fd9-4973-aa16-f5a02aa58c36" containerID="d97c17e2c564dea69374f0df4da2754ab8c83af70274438219c2cac3c94bf2a0" exitCode=0 Jan 01 09:56:25 crc kubenswrapper[4867]: I0101 09:56:25.760430 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" 
event={"ID":"436e2299-2fd9-4973-aa16-f5a02aa58c36","Type":"ContainerDied","Data":"d97c17e2c564dea69374f0df4da2754ab8c83af70274438219c2cac3c94bf2a0"} Jan 01 09:56:25 crc kubenswrapper[4867]: I0101 09:56:25.760768 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" event={"ID":"436e2299-2fd9-4973-aa16-f5a02aa58c36","Type":"ContainerStarted","Data":"fc37bc3d2af8957f58f5b08bb81b3fdea332522478931d9f66901d073db13cb0"} Jan 01 09:56:25 crc kubenswrapper[4867]: I0101 09:56:25.764757 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nk2vs" event={"ID":"ed82f483-02ce-439b-8a62-869f4d32a77d","Type":"ContainerStarted","Data":"3384bd80934c491a9f17feb7c4920f13a50a2d6bc000a32a889d8e5291248894"} Jan 01 09:56:25 crc kubenswrapper[4867]: I0101 09:56:25.765033 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nk2vs" event={"ID":"ed82f483-02ce-439b-8a62-869f4d32a77d","Type":"ContainerStarted","Data":"40616fa01b8e0f6448b653dc0415d785a127a08ece784d711ec37e85ed04c555"} Jan 01 09:56:25 crc kubenswrapper[4867]: I0101 09:56:25.836473 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nk2vs" podStartSLOduration=2.836449659 podStartE2EDuration="2.836449659s" podCreationTimestamp="2026-01-01 09:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:56:25.83119949 +0000 UTC m=+5394.966468269" watchObservedRunningTime="2026-01-01 09:56:25.836449659 +0000 UTC m=+5394.971718448" Jan 01 09:56:26 crc kubenswrapper[4867]: I0101 09:56:26.775683 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" event={"ID":"436e2299-2fd9-4973-aa16-f5a02aa58c36","Type":"ContainerStarted","Data":"9f60f9940ab18cfcf959e730f9a9a1dd8a3df8e715391f8bbbffbc21e5f96a51"} Jan 01 09:56:26 crc 
kubenswrapper[4867]: I0101 09:56:26.776123 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:26 crc kubenswrapper[4867]: I0101 09:56:26.813231 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" podStartSLOduration=3.8132031299999998 podStartE2EDuration="3.81320313s" podCreationTimestamp="2026-01-01 09:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:56:26.801953072 +0000 UTC m=+5395.937221901" watchObservedRunningTime="2026-01-01 09:56:26.81320313 +0000 UTC m=+5395.948471939" Jan 01 09:56:28 crc kubenswrapper[4867]: I0101 09:56:28.805650 4867 generic.go:334] "Generic (PLEG): container finished" podID="ed82f483-02ce-439b-8a62-869f4d32a77d" containerID="3384bd80934c491a9f17feb7c4920f13a50a2d6bc000a32a889d8e5291248894" exitCode=0 Jan 01 09:56:28 crc kubenswrapper[4867]: I0101 09:56:28.806111 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nk2vs" event={"ID":"ed82f483-02ce-439b-8a62-869f4d32a77d","Type":"ContainerDied","Data":"3384bd80934c491a9f17feb7c4920f13a50a2d6bc000a32a889d8e5291248894"} Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.283660 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.438367 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-fernet-keys\") pod \"ed82f483-02ce-439b-8a62-869f4d32a77d\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.438512 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-credential-keys\") pod \"ed82f483-02ce-439b-8a62-869f4d32a77d\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.438592 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-combined-ca-bundle\") pod \"ed82f483-02ce-439b-8a62-869f4d32a77d\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.438722 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-scripts\") pod \"ed82f483-02ce-439b-8a62-869f4d32a77d\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.438766 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-config-data\") pod \"ed82f483-02ce-439b-8a62-869f4d32a77d\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.438816 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x75pn\" (UniqueName: 
\"kubernetes.io/projected/ed82f483-02ce-439b-8a62-869f4d32a77d-kube-api-access-x75pn\") pod \"ed82f483-02ce-439b-8a62-869f4d32a77d\" (UID: \"ed82f483-02ce-439b-8a62-869f4d32a77d\") " Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.447373 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ed82f483-02ce-439b-8a62-869f4d32a77d" (UID: "ed82f483-02ce-439b-8a62-869f4d32a77d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.447465 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed82f483-02ce-439b-8a62-869f4d32a77d-kube-api-access-x75pn" (OuterVolumeSpecName: "kube-api-access-x75pn") pod "ed82f483-02ce-439b-8a62-869f4d32a77d" (UID: "ed82f483-02ce-439b-8a62-869f4d32a77d"). InnerVolumeSpecName "kube-api-access-x75pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.447853 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-scripts" (OuterVolumeSpecName: "scripts") pod "ed82f483-02ce-439b-8a62-869f4d32a77d" (UID: "ed82f483-02ce-439b-8a62-869f4d32a77d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.451267 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ed82f483-02ce-439b-8a62-869f4d32a77d" (UID: "ed82f483-02ce-439b-8a62-869f4d32a77d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.470223 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed82f483-02ce-439b-8a62-869f4d32a77d" (UID: "ed82f483-02ce-439b-8a62-869f4d32a77d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.486753 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-config-data" (OuterVolumeSpecName: "config-data") pod "ed82f483-02ce-439b-8a62-869f4d32a77d" (UID: "ed82f483-02ce-439b-8a62-869f4d32a77d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.541851 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.541933 4867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.541954 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.541971 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:30 crc 
kubenswrapper[4867]: I0101 09:56:30.541987 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed82f483-02ce-439b-8a62-869f4d32a77d-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.542004 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x75pn\" (UniqueName: \"kubernetes.io/projected/ed82f483-02ce-439b-8a62-869f4d32a77d-kube-api-access-x75pn\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.827018 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nk2vs" event={"ID":"ed82f483-02ce-439b-8a62-869f4d32a77d","Type":"ContainerDied","Data":"40616fa01b8e0f6448b653dc0415d785a127a08ece784d711ec37e85ed04c555"} Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.827066 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40616fa01b8e0f6448b653dc0415d785a127a08ece784d711ec37e85ed04c555" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.827134 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nk2vs" Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.912545 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nk2vs"] Jan 01 09:56:30 crc kubenswrapper[4867]: I0101 09:56:30.918309 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nk2vs"] Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.011091 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-z9c4q"] Jan 01 09:56:31 crc kubenswrapper[4867]: E0101 09:56:31.011691 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed82f483-02ce-439b-8a62-869f4d32a77d" containerName="keystone-bootstrap" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.011779 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed82f483-02ce-439b-8a62-869f4d32a77d" containerName="keystone-bootstrap" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.012071 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed82f483-02ce-439b-8a62-869f4d32a77d" containerName="keystone-bootstrap" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.012823 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.015740 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.016021 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.016100 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.016616 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.018697 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x9f2t" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.026469 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-z9c4q"] Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.142340 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed82f483-02ce-439b-8a62-869f4d32a77d" path="/var/lib/kubelet/pods/ed82f483-02ce-439b-8a62-869f4d32a77d/volumes" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.153493 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-combined-ca-bundle\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.153596 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjzfr\" (UniqueName: \"kubernetes.io/projected/92572092-af40-4fa8-973d-2d38bce43919-kube-api-access-rjzfr\") pod 
\"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.153713 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-credential-keys\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.153798 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-fernet-keys\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.153862 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-scripts\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.153915 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-config-data\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.255871 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-credential-keys\") pod \"keystone-bootstrap-z9c4q\" (UID: 
\"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.255993 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-fernet-keys\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.256223 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-scripts\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.256252 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-config-data\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.256296 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-combined-ca-bundle\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.256341 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjzfr\" (UniqueName: \"kubernetes.io/projected/92572092-af40-4fa8-973d-2d38bce43919-kube-api-access-rjzfr\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc 
kubenswrapper[4867]: I0101 09:56:31.258356 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.259296 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.259345 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.266746 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-combined-ca-bundle\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.269647 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-credential-keys\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.269989 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-fernet-keys\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.270145 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-config-data\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.270824 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-scripts\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.279415 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjzfr\" (UniqueName: \"kubernetes.io/projected/92572092-af40-4fa8-973d-2d38bce43919-kube-api-access-rjzfr\") pod \"keystone-bootstrap-z9c4q\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.333873 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x9f2t" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.341955 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.797742 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-z9c4q"] Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.817218 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 01 09:56:31 crc kubenswrapper[4867]: I0101 09:56:31.839504 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z9c4q" event={"ID":"92572092-af40-4fa8-973d-2d38bce43919","Type":"ContainerStarted","Data":"fdea70247698d7c416d61934899769385204ff8365a2aed97b01ec89bb4d31d0"} Jan 01 09:56:32 crc kubenswrapper[4867]: I0101 09:56:32.850066 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z9c4q" event={"ID":"92572092-af40-4fa8-973d-2d38bce43919","Type":"ContainerStarted","Data":"b0b07928e39c6cd3d25763e0b7e8d2f448d6b6961438d68280ed4e1b031c8c8a"} Jan 01 09:56:32 crc 
kubenswrapper[4867]: I0101 09:56:32.882139 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-z9c4q" podStartSLOduration=2.882121212 podStartE2EDuration="2.882121212s" podCreationTimestamp="2026-01-01 09:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:56:32.875981818 +0000 UTC m=+5402.011250617" watchObservedRunningTime="2026-01-01 09:56:32.882121212 +0000 UTC m=+5402.017389991" Jan 01 09:56:34 crc kubenswrapper[4867]: I0101 09:56:34.365693 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69f667c44c-wpw5v" Jan 01 09:56:34 crc kubenswrapper[4867]: I0101 09:56:34.460490 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-9tdw9"] Jan 01 09:56:34 crc kubenswrapper[4867]: I0101 09:56:34.460764 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" podUID="a5017d29-c559-474f-b05c-f54aa95fc6a6" containerName="dnsmasq-dns" containerID="cri-o://b69fde5318f8057f95070322b0e79a50405226e1619d9431cb6fe75616b59aac" gracePeriod=10 Jan 01 09:56:34 crc kubenswrapper[4867]: I0101 09:56:34.872027 4867 generic.go:334] "Generic (PLEG): container finished" podID="a5017d29-c559-474f-b05c-f54aa95fc6a6" containerID="b69fde5318f8057f95070322b0e79a50405226e1619d9431cb6fe75616b59aac" exitCode=0 Jan 01 09:56:34 crc kubenswrapper[4867]: I0101 09:56:34.872509 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" event={"ID":"a5017d29-c559-474f-b05c-f54aa95fc6a6","Type":"ContainerDied","Data":"b69fde5318f8057f95070322b0e79a50405226e1619d9431cb6fe75616b59aac"} Jan 01 09:56:34 crc kubenswrapper[4867]: I0101 09:56:34.873941 4867 generic.go:334] "Generic (PLEG): container finished" podID="92572092-af40-4fa8-973d-2d38bce43919" 
containerID="b0b07928e39c6cd3d25763e0b7e8d2f448d6b6961438d68280ed4e1b031c8c8a" exitCode=0 Jan 01 09:56:34 crc kubenswrapper[4867]: I0101 09:56:34.873989 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z9c4q" event={"ID":"92572092-af40-4fa8-973d-2d38bce43919","Type":"ContainerDied","Data":"b0b07928e39c6cd3d25763e0b7e8d2f448d6b6961438d68280ed4e1b031c8c8a"} Jan 01 09:56:34 crc kubenswrapper[4867]: I0101 09:56:34.939728 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.027652 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-dns-svc\") pod \"a5017d29-c559-474f-b05c-f54aa95fc6a6\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.027733 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-nb\") pod \"a5017d29-c559-474f-b05c-f54aa95fc6a6\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.027850 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-sb\") pod \"a5017d29-c559-474f-b05c-f54aa95fc6a6\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.027988 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpgtb\" (UniqueName: \"kubernetes.io/projected/a5017d29-c559-474f-b05c-f54aa95fc6a6-kube-api-access-mpgtb\") pod \"a5017d29-c559-474f-b05c-f54aa95fc6a6\" (UID: 
\"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.028652 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-config\") pod \"a5017d29-c559-474f-b05c-f54aa95fc6a6\" (UID: \"a5017d29-c559-474f-b05c-f54aa95fc6a6\") " Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.048131 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5017d29-c559-474f-b05c-f54aa95fc6a6-kube-api-access-mpgtb" (OuterVolumeSpecName: "kube-api-access-mpgtb") pod "a5017d29-c559-474f-b05c-f54aa95fc6a6" (UID: "a5017d29-c559-474f-b05c-f54aa95fc6a6"). InnerVolumeSpecName "kube-api-access-mpgtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.129497 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a5017d29-c559-474f-b05c-f54aa95fc6a6" (UID: "a5017d29-c559-474f-b05c-f54aa95fc6a6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.130725 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpgtb\" (UniqueName: \"kubernetes.io/projected/a5017d29-c559-474f-b05c-f54aa95fc6a6-kube-api-access-mpgtb\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.130750 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.137352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-config" (OuterVolumeSpecName: "config") pod "a5017d29-c559-474f-b05c-f54aa95fc6a6" (UID: "a5017d29-c559-474f-b05c-f54aa95fc6a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.145923 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a5017d29-c559-474f-b05c-f54aa95fc6a6" (UID: "a5017d29-c559-474f-b05c-f54aa95fc6a6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.155271 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5017d29-c559-474f-b05c-f54aa95fc6a6" (UID: "a5017d29-c559-474f-b05c-f54aa95fc6a6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.232807 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-config\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.232859 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.232878 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5017d29-c559-474f-b05c-f54aa95fc6a6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.888583 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" event={"ID":"a5017d29-c559-474f-b05c-f54aa95fc6a6","Type":"ContainerDied","Data":"8a8cc7a55c72c5882752564c5eb3df3c4bf5e172832281fd6322de600649d941"} Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.888794 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69db5595f9-9tdw9" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.889043 4867 scope.go:117] "RemoveContainer" containerID="b69fde5318f8057f95070322b0e79a50405226e1619d9431cb6fe75616b59aac" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.932639 4867 scope.go:117] "RemoveContainer" containerID="061b19fc4abd885559de0a0d33eb6624450c8db6c671f153ef4897101b33665b" Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.968759 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-9tdw9"] Jan 01 09:56:35 crc kubenswrapper[4867]: I0101 09:56:35.978353 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69db5595f9-9tdw9"] Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.404121 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.562727 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-credential-keys\") pod \"92572092-af40-4fa8-973d-2d38bce43919\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.562800 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-combined-ca-bundle\") pod \"92572092-af40-4fa8-973d-2d38bce43919\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.563038 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjzfr\" (UniqueName: \"kubernetes.io/projected/92572092-af40-4fa8-973d-2d38bce43919-kube-api-access-rjzfr\") pod \"92572092-af40-4fa8-973d-2d38bce43919\" (UID: 
\"92572092-af40-4fa8-973d-2d38bce43919\") " Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.563110 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-fernet-keys\") pod \"92572092-af40-4fa8-973d-2d38bce43919\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.563196 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-scripts\") pod \"92572092-af40-4fa8-973d-2d38bce43919\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.563230 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-config-data\") pod \"92572092-af40-4fa8-973d-2d38bce43919\" (UID: \"92572092-af40-4fa8-973d-2d38bce43919\") " Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.573015 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "92572092-af40-4fa8-973d-2d38bce43919" (UID: "92572092-af40-4fa8-973d-2d38bce43919"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.573068 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-scripts" (OuterVolumeSpecName: "scripts") pod "92572092-af40-4fa8-973d-2d38bce43919" (UID: "92572092-af40-4fa8-973d-2d38bce43919"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.573609 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "92572092-af40-4fa8-973d-2d38bce43919" (UID: "92572092-af40-4fa8-973d-2d38bce43919"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.580855 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92572092-af40-4fa8-973d-2d38bce43919-kube-api-access-rjzfr" (OuterVolumeSpecName: "kube-api-access-rjzfr") pod "92572092-af40-4fa8-973d-2d38bce43919" (UID: "92572092-af40-4fa8-973d-2d38bce43919"). InnerVolumeSpecName "kube-api-access-rjzfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.610624 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-config-data" (OuterVolumeSpecName: "config-data") pod "92572092-af40-4fa8-973d-2d38bce43919" (UID: "92572092-af40-4fa8-973d-2d38bce43919"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.619581 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92572092-af40-4fa8-973d-2d38bce43919" (UID: "92572092-af40-4fa8-973d-2d38bce43919"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.667172 4867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.667215 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.667233 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjzfr\" (UniqueName: \"kubernetes.io/projected/92572092-af40-4fa8-973d-2d38bce43919-kube-api-access-rjzfr\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.667249 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.667304 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.667388 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92572092-af40-4fa8-973d-2d38bce43919-scripts\") on node \"crc\" DevicePath \"\"" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.904954 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z9c4q" event={"ID":"92572092-af40-4fa8-973d-2d38bce43919","Type":"ContainerDied","Data":"fdea70247698d7c416d61934899769385204ff8365a2aed97b01ec89bb4d31d0"} Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 
09:56:36.905055 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdea70247698d7c416d61934899769385204ff8365a2aed97b01ec89bb4d31d0" Jan 01 09:56:36 crc kubenswrapper[4867]: I0101 09:56:36.904968 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-z9c4q" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.017560 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ffdb4dfc7-dbp8r"] Jan 01 09:56:37 crc kubenswrapper[4867]: E0101 09:56:37.018083 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5017d29-c559-474f-b05c-f54aa95fc6a6" containerName="dnsmasq-dns" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.018113 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5017d29-c559-474f-b05c-f54aa95fc6a6" containerName="dnsmasq-dns" Jan 01 09:56:37 crc kubenswrapper[4867]: E0101 09:56:37.018163 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92572092-af40-4fa8-973d-2d38bce43919" containerName="keystone-bootstrap" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.018176 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="92572092-af40-4fa8-973d-2d38bce43919" containerName="keystone-bootstrap" Jan 01 09:56:37 crc kubenswrapper[4867]: E0101 09:56:37.018208 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5017d29-c559-474f-b05c-f54aa95fc6a6" containerName="init" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.018218 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5017d29-c559-474f-b05c-f54aa95fc6a6" containerName="init" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.018450 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="92572092-af40-4fa8-973d-2d38bce43919" containerName="keystone-bootstrap" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.018477 4867 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a5017d29-c559-474f-b05c-f54aa95fc6a6" containerName="dnsmasq-dns" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.019244 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.023111 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.023505 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-domains" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.023836 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.027341 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x9f2t" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.027736 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.027917 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.027911 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.057598 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ffdb4dfc7-dbp8r"] Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.139113 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5017d29-c559-474f-b05c-f54aa95fc6a6" path="/var/lib/kubelet/pods/a5017d29-c559-474f-b05c-f54aa95fc6a6/volumes" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.175550 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-public-tls-certs\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.175609 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-internal-tls-certs\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.175673 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-scripts\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.175699 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-fernet-keys\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.175735 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-combined-ca-bundle\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.175770 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"keystone-domains\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-keystone-domains\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.175792 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr5fl\" (UniqueName: \"kubernetes.io/projected/82018793-9d72-4fd1-b828-368c2ed205d9-kube-api-access-jr5fl\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.175836 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-credential-keys\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.175875 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-config-data\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.311281 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-scripts\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.311544 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-fernet-keys\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.311690 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-combined-ca-bundle\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.311806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"keystone-domains\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-keystone-domains\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.311919 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr5fl\" (UniqueName: \"kubernetes.io/projected/82018793-9d72-4fd1-b828-368c2ed205d9-kube-api-access-jr5fl\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.312025 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-credential-keys\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.312110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-config-data\") pod 
\"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.312197 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-public-tls-certs\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.312272 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-internal-tls-certs\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.316955 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-internal-tls-certs\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.316984 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-combined-ca-bundle\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.317415 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"keystone-domains\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-keystone-domains\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " 
pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.317427 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-credential-keys\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.317630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-scripts\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.318255 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-config-data\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.319374 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-public-tls-certs\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.326001 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/82018793-9d72-4fd1-b828-368c2ed205d9-fernet-keys\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.329438 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-jr5fl\" (UniqueName: \"kubernetes.io/projected/82018793-9d72-4fd1-b828-368c2ed205d9-kube-api-access-jr5fl\") pod \"keystone-ffdb4dfc7-dbp8r\" (UID: \"82018793-9d72-4fd1-b828-368c2ed205d9\") " pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.357145 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.802496 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ffdb4dfc7-dbp8r"] Jan 01 09:56:37 crc kubenswrapper[4867]: I0101 09:56:37.927951 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ffdb4dfc7-dbp8r" event={"ID":"82018793-9d72-4fd1-b828-368c2ed205d9","Type":"ContainerStarted","Data":"f7004dd54911e9a7f2dc177dc30e15a830a51f98b2bc22219732682431cfd8d8"} Jan 01 09:56:38 crc kubenswrapper[4867]: I0101 09:56:38.937498 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ffdb4dfc7-dbp8r" event={"ID":"82018793-9d72-4fd1-b828-368c2ed205d9","Type":"ContainerStarted","Data":"4c1fd0110fc8d806ca670a3f9243399542d7280acedf72c84a410c954911fbec"} Jan 01 09:56:38 crc kubenswrapper[4867]: I0101 09:56:38.938061 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:56:38 crc kubenswrapper[4867]: I0101 09:56:38.956536 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-ffdb4dfc7-dbp8r" podStartSLOduration=2.956511061 podStartE2EDuration="2.956511061s" podCreationTimestamp="2026-01-01 09:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:56:38.954867614 +0000 UTC m=+5408.090136473" watchObservedRunningTime="2026-01-01 09:56:38.956511061 +0000 UTC m=+5408.091779880" Jan 01 09:57:08 crc kubenswrapper[4867]: 
I0101 09:57:08.901134 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-ffdb4dfc7-dbp8r" Jan 01 09:57:12 crc kubenswrapper[4867]: I0101 09:57:12.927079 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 01 09:57:12 crc kubenswrapper[4867]: I0101 09:57:12.929977 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 01 09:57:12 crc kubenswrapper[4867]: I0101 09:57:12.931938 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-pvp6n" Jan 01 09:57:12 crc kubenswrapper[4867]: I0101 09:57:12.933152 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 01 09:57:12 crc kubenswrapper[4867]: I0101 09:57:12.933167 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 01 09:57:12 crc kubenswrapper[4867]: I0101 09:57:12.942174 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 01 09:57:12 crc kubenswrapper[4867]: I0101 09:57:12.965445 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 01 09:57:12 crc kubenswrapper[4867]: E0101 09:57:12.966118 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-vrdkz openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-vrdkz openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="bbb0bc29-6ea8-41ce-ba5a-d53bcf72014e" Jan 01 09:57:12 crc kubenswrapper[4867]: I0101 09:57:12.983360 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 01 09:57:12 crc kubenswrapper[4867]: I0101 09:57:12.992932 4867 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/openstackclient"] Jan 01 09:57:12 crc kubenswrapper[4867]: I0101 09:57:12.996307 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.003940 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.139191 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbb0bc29-6ea8-41ce-ba5a-d53bcf72014e" path="/var/lib/kubelet/pods/bbb0bc29-6ea8-41ce-ba5a-d53bcf72014e/volumes" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.193594 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b30be34-6c47-4a46-92fd-e6629f548214-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.193638 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4jph\" (UniqueName: \"kubernetes.io/projected/6b30be34-6c47-4a46-92fd-e6629f548214-kube-api-access-p4jph\") pod \"openstackclient\" (UID: \"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.193699 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b30be34-6c47-4a46-92fd-e6629f548214-openstack-config-secret\") pod \"openstackclient\" (UID: \"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.193766 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/6b30be34-6c47-4a46-92fd-e6629f548214-openstack-config\") pod \"openstackclient\" (UID: \"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.295244 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b30be34-6c47-4a46-92fd-e6629f548214-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.295307 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4jph\" (UniqueName: \"kubernetes.io/projected/6b30be34-6c47-4a46-92fd-e6629f548214-kube-api-access-p4jph\") pod \"openstackclient\" (UID: \"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.295403 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b30be34-6c47-4a46-92fd-e6629f548214-openstack-config-secret\") pod \"openstackclient\" (UID: \"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.295458 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b30be34-6c47-4a46-92fd-e6629f548214-openstack-config\") pod \"openstackclient\" (UID: \"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.296423 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b30be34-6c47-4a46-92fd-e6629f548214-openstack-config\") pod \"openstackclient\" (UID: 
\"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.301614 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b30be34-6c47-4a46-92fd-e6629f548214-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.301957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b30be34-6c47-4a46-92fd-e6629f548214-openstack-config-secret\") pod \"openstackclient\" (UID: \"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.319407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4jph\" (UniqueName: \"kubernetes.io/projected/6b30be34-6c47-4a46-92fd-e6629f548214-kube-api-access-p4jph\") pod \"openstackclient\" (UID: \"6b30be34-6c47-4a46-92fd-e6629f548214\") " pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.324451 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.396747 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.399292 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="bbb0bc29-6ea8-41ce-ba5a-d53bcf72014e" podUID="6b30be34-6c47-4a46-92fd-e6629f548214" Jan 01 09:57:13 crc kubenswrapper[4867]: I0101 09:57:13.618550 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 01 09:57:14 crc kubenswrapper[4867]: I0101 09:57:14.066004 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 01 09:57:14 crc kubenswrapper[4867]: W0101 09:57:14.074313 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b30be34_6c47_4a46_92fd_e6629f548214.slice/crio-caeb365963f41dff50a0ec75ebfb837e2d52bd7255b78b994590384a570697b4 WatchSource:0}: Error finding container caeb365963f41dff50a0ec75ebfb837e2d52bd7255b78b994590384a570697b4: Status 404 returned error can't find the container with id caeb365963f41dff50a0ec75ebfb837e2d52bd7255b78b994590384a570697b4 Jan 01 09:57:14 crc kubenswrapper[4867]: I0101 09:57:14.339100 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6b30be34-6c47-4a46-92fd-e6629f548214","Type":"ContainerStarted","Data":"5c72464a42f8860143fb3de199a4a241d178f47abcc3947eb90537371eb40d49"} Jan 01 09:57:14 crc kubenswrapper[4867]: I0101 09:57:14.339645 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6b30be34-6c47-4a46-92fd-e6629f548214","Type":"ContainerStarted","Data":"caeb365963f41dff50a0ec75ebfb837e2d52bd7255b78b994590384a570697b4"} Jan 01 09:57:14 crc kubenswrapper[4867]: I0101 09:57:14.339122 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 01 09:57:14 crc kubenswrapper[4867]: I0101 09:57:14.357539 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.357512236 podStartE2EDuration="2.357512236s" podCreationTimestamp="2026-01-01 09:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 09:57:14.357049763 +0000 UTC m=+5443.492318572" watchObservedRunningTime="2026-01-01 09:57:14.357512236 +0000 UTC m=+5443.492781015" Jan 01 09:57:14 crc kubenswrapper[4867]: I0101 09:57:14.393589 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="bbb0bc29-6ea8-41ce-ba5a-d53bcf72014e" podUID="6b30be34-6c47-4a46-92fd-e6629f548214" Jan 01 09:57:41 crc kubenswrapper[4867]: I0101 09:57:41.425665 4867 scope.go:117] "RemoveContainer" containerID="9164dc275131bfab97eb061c268bea17301a1db2cd92624742201660a96712b1" Jan 01 09:57:41 crc kubenswrapper[4867]: I0101 09:57:41.460668 4867 scope.go:117] "RemoveContainer" containerID="fdb407e475d05c2d4596ab2464c48035358f4a7fb329d521383a4bbd4c0fb30f" Jan 01 09:57:41 crc kubenswrapper[4867]: I0101 09:57:41.534359 4867 scope.go:117] "RemoveContainer" containerID="032c93d1d4384cbbe4e53e201cbad1a2439b39618711a0b8338cd2beb7c63196" Jan 01 09:57:41 crc kubenswrapper[4867]: I0101 09:57:41.573456 4867 scope.go:117] "RemoveContainer" containerID="a437daf25fe445984b12e79ecd4b77dc56eff0b79ee371c7b3b6225531125def" Jan 01 09:57:41 crc kubenswrapper[4867]: I0101 09:57:41.624537 4867 scope.go:117] "RemoveContainer" containerID="4947f14ef0a0cb9092112f6e82f0fff713fb1daa98d3700e718b2213657e04a6" Jan 01 09:57:41 crc kubenswrapper[4867]: I0101 09:57:41.657002 4867 scope.go:117] "RemoveContainer" containerID="55fa3562c18f66503d11a25ab40feeb6f9b7d1d30835bebde7245c2edd5a77fb" Jan 01 09:57:51 crc 
kubenswrapper[4867]: I0101 09:57:51.331295 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:57:51 crc kubenswrapper[4867]: I0101 09:57:51.331999 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:58:21 crc kubenswrapper[4867]: I0101 09:58:21.331347 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:58:21 crc kubenswrapper[4867]: I0101 09:58:21.332126 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:58:51 crc kubenswrapper[4867]: I0101 09:58:51.331312 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 09:58:51 crc kubenswrapper[4867]: I0101 09:58:51.332088 4867 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 09:58:51 crc kubenswrapper[4867]: I0101 09:58:51.332152 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 09:58:51 crc kubenswrapper[4867]: I0101 09:58:51.332992 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"41d797c5a9ee389d0167543c461e0396ce5911531f543d8083183360c9bf4c88"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 09:58:51 crc kubenswrapper[4867]: I0101 09:58:51.333051 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://41d797c5a9ee389d0167543c461e0396ce5911531f543d8083183360c9bf4c88" gracePeriod=600 Jan 01 09:58:52 crc kubenswrapper[4867]: I0101 09:58:52.353795 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="41d797c5a9ee389d0167543c461e0396ce5911531f543d8083183360c9bf4c88" exitCode=0 Jan 01 09:58:52 crc kubenswrapper[4867]: I0101 09:58:52.353854 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"41d797c5a9ee389d0167543c461e0396ce5911531f543d8083183360c9bf4c88"} Jan 01 09:58:52 crc kubenswrapper[4867]: I0101 09:58:52.354448 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerStarted","Data":"4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46"} Jan 01 09:58:52 crc kubenswrapper[4867]: I0101 09:58:52.354471 4867 scope.go:117] "RemoveContainer" containerID="5c8950a1f682766a61154106a21527175eab09dc1f0de8f7e4d1ac387a869c79" Jan 01 09:59:24 crc kubenswrapper[4867]: I0101 09:59:24.076675 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vwjmr"] Jan 01 09:59:24 crc kubenswrapper[4867]: I0101 09:59:24.087305 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vwjmr"] Jan 01 09:59:25 crc kubenswrapper[4867]: I0101 09:59:25.146584 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ddf0bf9-9447-4286-af8b-615436da38bf" path="/var/lib/kubelet/pods/2ddf0bf9-9447-4286-af8b-615436da38bf/volumes" Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.456638 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jrx2d"] Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.459353 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.488154 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrx2d"] Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.558018 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-utilities\") pod \"community-operators-jrx2d\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.558094 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48mhr\" (UniqueName: \"kubernetes.io/projected/f813ef05-9064-465b-8e51-37e08d0d9021-kube-api-access-48mhr\") pod \"community-operators-jrx2d\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.558211 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-catalog-content\") pod \"community-operators-jrx2d\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.659735 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-utilities\") pod \"community-operators-jrx2d\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.659806 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-48mhr\" (UniqueName: \"kubernetes.io/projected/f813ef05-9064-465b-8e51-37e08d0d9021-kube-api-access-48mhr\") pod \"community-operators-jrx2d\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.659857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-catalog-content\") pod \"community-operators-jrx2d\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.660404 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-utilities\") pod \"community-operators-jrx2d\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.660499 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-catalog-content\") pod \"community-operators-jrx2d\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.689757 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48mhr\" (UniqueName: \"kubernetes.io/projected/f813ef05-9064-465b-8e51-37e08d0d9021-kube-api-access-48mhr\") pod \"community-operators-jrx2d\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:37 crc kubenswrapper[4867]: I0101 09:59:37.787394 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:38 crc kubenswrapper[4867]: I0101 09:59:38.346149 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrx2d"] Jan 01 09:59:38 crc kubenswrapper[4867]: I0101 09:59:38.877423 4867 generic.go:334] "Generic (PLEG): container finished" podID="f813ef05-9064-465b-8e51-37e08d0d9021" containerID="28529004a7ec64b2187fc8bf650bafe9239d2ac9bd1f98eb34807766989bbd1e" exitCode=0 Jan 01 09:59:38 crc kubenswrapper[4867]: I0101 09:59:38.877487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrx2d" event={"ID":"f813ef05-9064-465b-8e51-37e08d0d9021","Type":"ContainerDied","Data":"28529004a7ec64b2187fc8bf650bafe9239d2ac9bd1f98eb34807766989bbd1e"} Jan 01 09:59:38 crc kubenswrapper[4867]: I0101 09:59:38.877524 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrx2d" event={"ID":"f813ef05-9064-465b-8e51-37e08d0d9021","Type":"ContainerStarted","Data":"28ad38b98e98e56b0e6e77a8f6d5ad27e797a54b2d74c2f406a85c018b3d6996"} Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.449281 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2k4nq/must-gather-94tz9"] Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.451580 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2k4nq/must-gather-94tz9" Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.456024 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2k4nq/must-gather-94tz9"] Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.458941 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2k4nq"/"default-dockercfg-n9twm" Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.459746 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2k4nq"/"kube-root-ca.crt" Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.459971 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2k4nq"/"openshift-service-ca.crt" Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.628999 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/456bdd85-1e92-4149-a408-d7bef1042b83-must-gather-output\") pod \"must-gather-94tz9\" (UID: \"456bdd85-1e92-4149-a408-d7bef1042b83\") " pod="openshift-must-gather-2k4nq/must-gather-94tz9" Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.629053 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl5vm\" (UniqueName: \"kubernetes.io/projected/456bdd85-1e92-4149-a408-d7bef1042b83-kube-api-access-rl5vm\") pod \"must-gather-94tz9\" (UID: \"456bdd85-1e92-4149-a408-d7bef1042b83\") " pod="openshift-must-gather-2k4nq/must-gather-94tz9" Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.730833 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/456bdd85-1e92-4149-a408-d7bef1042b83-must-gather-output\") pod \"must-gather-94tz9\" (UID: \"456bdd85-1e92-4149-a408-d7bef1042b83\") " 
pod="openshift-must-gather-2k4nq/must-gather-94tz9" Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.730893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl5vm\" (UniqueName: \"kubernetes.io/projected/456bdd85-1e92-4149-a408-d7bef1042b83-kube-api-access-rl5vm\") pod \"must-gather-94tz9\" (UID: \"456bdd85-1e92-4149-a408-d7bef1042b83\") " pod="openshift-must-gather-2k4nq/must-gather-94tz9" Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.731329 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/456bdd85-1e92-4149-a408-d7bef1042b83-must-gather-output\") pod \"must-gather-94tz9\" (UID: \"456bdd85-1e92-4149-a408-d7bef1042b83\") " pod="openshift-must-gather-2k4nq/must-gather-94tz9" Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.759564 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl5vm\" (UniqueName: \"kubernetes.io/projected/456bdd85-1e92-4149-a408-d7bef1042b83-kube-api-access-rl5vm\") pod \"must-gather-94tz9\" (UID: \"456bdd85-1e92-4149-a408-d7bef1042b83\") " pod="openshift-must-gather-2k4nq/must-gather-94tz9" Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.771656 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2k4nq/must-gather-94tz9" Jan 01 09:59:39 crc kubenswrapper[4867]: I0101 09:59:39.887132 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrx2d" event={"ID":"f813ef05-9064-465b-8e51-37e08d0d9021","Type":"ContainerStarted","Data":"bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721"} Jan 01 09:59:40 crc kubenswrapper[4867]: I0101 09:59:40.195871 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2k4nq/must-gather-94tz9"] Jan 01 09:59:40 crc kubenswrapper[4867]: W0101 09:59:40.198285 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod456bdd85_1e92_4149_a408_d7bef1042b83.slice/crio-5884d441f6ff898d0a72d9058507ed0e5173090f4fb5468ea11f6c5d9efa901c WatchSource:0}: Error finding container 5884d441f6ff898d0a72d9058507ed0e5173090f4fb5468ea11f6c5d9efa901c: Status 404 returned error can't find the container with id 5884d441f6ff898d0a72d9058507ed0e5173090f4fb5468ea11f6c5d9efa901c Jan 01 09:59:40 crc kubenswrapper[4867]: I0101 09:59:40.895700 4867 generic.go:334] "Generic (PLEG): container finished" podID="f813ef05-9064-465b-8e51-37e08d0d9021" containerID="bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721" exitCode=0 Jan 01 09:59:40 crc kubenswrapper[4867]: I0101 09:59:40.895784 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrx2d" event={"ID":"f813ef05-9064-465b-8e51-37e08d0d9021","Type":"ContainerDied","Data":"bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721"} Jan 01 09:59:40 crc kubenswrapper[4867]: I0101 09:59:40.897413 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2k4nq/must-gather-94tz9" 
event={"ID":"456bdd85-1e92-4149-a408-d7bef1042b83","Type":"ContainerStarted","Data":"5884d441f6ff898d0a72d9058507ed0e5173090f4fb5468ea11f6c5d9efa901c"} Jan 01 09:59:41 crc kubenswrapper[4867]: I0101 09:59:41.819927 4867 scope.go:117] "RemoveContainer" containerID="3df9c6b2e5f8e1ff7dc37965e8ea9f56fcea60ce0a7fdc4c996226505b0311e6" Jan 01 09:59:41 crc kubenswrapper[4867]: I0101 09:59:41.911648 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrx2d" event={"ID":"f813ef05-9064-465b-8e51-37e08d0d9021","Type":"ContainerStarted","Data":"86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63"} Jan 01 09:59:41 crc kubenswrapper[4867]: I0101 09:59:41.931537 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jrx2d" podStartSLOduration=2.4400072010000002 podStartE2EDuration="4.931517151s" podCreationTimestamp="2026-01-01 09:59:37 +0000 UTC" firstStartedPulling="2026-01-01 09:59:38.88031718 +0000 UTC m=+5588.015585949" lastFinishedPulling="2026-01-01 09:59:41.37182713 +0000 UTC m=+5590.507095899" observedRunningTime="2026-01-01 09:59:41.928210487 +0000 UTC m=+5591.063479256" watchObservedRunningTime="2026-01-01 09:59:41.931517151 +0000 UTC m=+5591.066785910" Jan 01 09:59:47 crc kubenswrapper[4867]: I0101 09:59:47.787959 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:47 crc kubenswrapper[4867]: I0101 09:59:47.788530 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:47 crc kubenswrapper[4867]: I0101 09:59:47.835425 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:47 crc kubenswrapper[4867]: I0101 09:59:47.981452 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-2k4nq/must-gather-94tz9" event={"ID":"456bdd85-1e92-4149-a408-d7bef1042b83","Type":"ContainerStarted","Data":"fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f"} Jan 01 09:59:47 crc kubenswrapper[4867]: I0101 09:59:47.981520 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2k4nq/must-gather-94tz9" event={"ID":"456bdd85-1e92-4149-a408-d7bef1042b83","Type":"ContainerStarted","Data":"4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8"} Jan 01 09:59:48 crc kubenswrapper[4867]: I0101 09:59:48.010519 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2k4nq/must-gather-94tz9" podStartSLOduration=2.383949374 podStartE2EDuration="9.010491339s" podCreationTimestamp="2026-01-01 09:59:39 +0000 UTC" firstStartedPulling="2026-01-01 09:59:40.199931728 +0000 UTC m=+5589.335200527" lastFinishedPulling="2026-01-01 09:59:46.826473723 +0000 UTC m=+5595.961742492" observedRunningTime="2026-01-01 09:59:48.00209086 +0000 UTC m=+5597.137359659" watchObservedRunningTime="2026-01-01 09:59:48.010491339 +0000 UTC m=+5597.145760138" Jan 01 09:59:48 crc kubenswrapper[4867]: I0101 09:59:48.050706 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:48 crc kubenswrapper[4867]: I0101 09:59:48.109325 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrx2d"] Jan 01 09:59:49 crc kubenswrapper[4867]: I0101 09:59:49.799641 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2k4nq/crc-debug-6gr65"] Jan 01 09:59:49 crc kubenswrapper[4867]: I0101 09:59:49.801943 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2k4nq/crc-debug-6gr65" Jan 01 09:59:49 crc kubenswrapper[4867]: I0101 09:59:49.938836 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd54n\" (UniqueName: \"kubernetes.io/projected/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-kube-api-access-wd54n\") pod \"crc-debug-6gr65\" (UID: \"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c\") " pod="openshift-must-gather-2k4nq/crc-debug-6gr65" Jan 01 09:59:49 crc kubenswrapper[4867]: I0101 09:59:49.939041 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-host\") pod \"crc-debug-6gr65\" (UID: \"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c\") " pod="openshift-must-gather-2k4nq/crc-debug-6gr65" Jan 01 09:59:49 crc kubenswrapper[4867]: I0101 09:59:49.999790 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jrx2d" podUID="f813ef05-9064-465b-8e51-37e08d0d9021" containerName="registry-server" containerID="cri-o://86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63" gracePeriod=2 Jan 01 09:59:50 crc kubenswrapper[4867]: I0101 09:59:50.040819 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-host\") pod \"crc-debug-6gr65\" (UID: \"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c\") " pod="openshift-must-gather-2k4nq/crc-debug-6gr65" Jan 01 09:59:50 crc kubenswrapper[4867]: I0101 09:59:50.041027 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-host\") pod \"crc-debug-6gr65\" (UID: \"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c\") " pod="openshift-must-gather-2k4nq/crc-debug-6gr65" Jan 01 09:59:50 crc kubenswrapper[4867]: 
I0101 09:59:50.041098 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd54n\" (UniqueName: \"kubernetes.io/projected/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-kube-api-access-wd54n\") pod \"crc-debug-6gr65\" (UID: \"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c\") " pod="openshift-must-gather-2k4nq/crc-debug-6gr65" Jan 01 09:59:50 crc kubenswrapper[4867]: I0101 09:59:50.079766 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd54n\" (UniqueName: \"kubernetes.io/projected/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-kube-api-access-wd54n\") pod \"crc-debug-6gr65\" (UID: \"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c\") " pod="openshift-must-gather-2k4nq/crc-debug-6gr65" Jan 01 09:59:50 crc kubenswrapper[4867]: I0101 09:59:50.124214 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2k4nq/crc-debug-6gr65" Jan 01 09:59:50 crc kubenswrapper[4867]: W0101 09:59:50.171016 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode422cbc7_a3ab_4ed9_9e73_db95fd37ab9c.slice/crio-44e23384215e5e7ee74a4875146295b26ec84179bf14cf0230d6bd9c087b1c03 WatchSource:0}: Error finding container 44e23384215e5e7ee74a4875146295b26ec84179bf14cf0230d6bd9c087b1c03: Status 404 returned error can't find the container with id 44e23384215e5e7ee74a4875146295b26ec84179bf14cf0230d6bd9c087b1c03 Jan 01 09:59:50 crc kubenswrapper[4867]: I0101 09:59:50.918251 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.010337 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2k4nq/crc-debug-6gr65" event={"ID":"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c","Type":"ContainerStarted","Data":"44e23384215e5e7ee74a4875146295b26ec84179bf14cf0230d6bd9c087b1c03"} Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.014373 4867 generic.go:334] "Generic (PLEG): container finished" podID="f813ef05-9064-465b-8e51-37e08d0d9021" containerID="86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63" exitCode=0 Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.014407 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrx2d" event={"ID":"f813ef05-9064-465b-8e51-37e08d0d9021","Type":"ContainerDied","Data":"86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63"} Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.014432 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrx2d" event={"ID":"f813ef05-9064-465b-8e51-37e08d0d9021","Type":"ContainerDied","Data":"28ad38b98e98e56b0e6e77a8f6d5ad27e797a54b2d74c2f406a85c018b3d6996"} Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.014452 4867 scope.go:117] "RemoveContainer" containerID="86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.014575 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrx2d" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.044097 4867 scope.go:117] "RemoveContainer" containerID="bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.063652 4867 scope.go:117] "RemoveContainer" containerID="28529004a7ec64b2187fc8bf650bafe9239d2ac9bd1f98eb34807766989bbd1e" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.082492 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48mhr\" (UniqueName: \"kubernetes.io/projected/f813ef05-9064-465b-8e51-37e08d0d9021-kube-api-access-48mhr\") pod \"f813ef05-9064-465b-8e51-37e08d0d9021\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.082582 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-catalog-content\") pod \"f813ef05-9064-465b-8e51-37e08d0d9021\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.082714 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-utilities\") pod \"f813ef05-9064-465b-8e51-37e08d0d9021\" (UID: \"f813ef05-9064-465b-8e51-37e08d0d9021\") " Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.083676 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-utilities" (OuterVolumeSpecName: "utilities") pod "f813ef05-9064-465b-8e51-37e08d0d9021" (UID: "f813ef05-9064-465b-8e51-37e08d0d9021"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.088950 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f813ef05-9064-465b-8e51-37e08d0d9021-kube-api-access-48mhr" (OuterVolumeSpecName: "kube-api-access-48mhr") pod "f813ef05-9064-465b-8e51-37e08d0d9021" (UID: "f813ef05-9064-465b-8e51-37e08d0d9021"). InnerVolumeSpecName "kube-api-access-48mhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.095716 4867 scope.go:117] "RemoveContainer" containerID="86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63" Jan 01 09:59:51 crc kubenswrapper[4867]: E0101 09:59:51.096198 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63\": container with ID starting with 86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63 not found: ID does not exist" containerID="86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.096227 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63"} err="failed to get container status \"86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63\": rpc error: code = NotFound desc = could not find container \"86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63\": container with ID starting with 86aa91873a27ed06c321e9bc354ddfa5bebfcf53e306daba09c33a433592ac63 not found: ID does not exist" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.096245 4867 scope.go:117] "RemoveContainer" containerID="bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721" Jan 01 09:59:51 crc kubenswrapper[4867]: E0101 09:59:51.096548 
4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721\": container with ID starting with bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721 not found: ID does not exist" containerID="bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.096598 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721"} err="failed to get container status \"bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721\": rpc error: code = NotFound desc = could not find container \"bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721\": container with ID starting with bb1ddd0290e1d6439d1200427f88a143eaae5e10cb0bdff55634a6cbd349b721 not found: ID does not exist" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.096625 4867 scope.go:117] "RemoveContainer" containerID="28529004a7ec64b2187fc8bf650bafe9239d2ac9bd1f98eb34807766989bbd1e" Jan 01 09:59:51 crc kubenswrapper[4867]: E0101 09:59:51.098231 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28529004a7ec64b2187fc8bf650bafe9239d2ac9bd1f98eb34807766989bbd1e\": container with ID starting with 28529004a7ec64b2187fc8bf650bafe9239d2ac9bd1f98eb34807766989bbd1e not found: ID does not exist" containerID="28529004a7ec64b2187fc8bf650bafe9239d2ac9bd1f98eb34807766989bbd1e" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.098252 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28529004a7ec64b2187fc8bf650bafe9239d2ac9bd1f98eb34807766989bbd1e"} err="failed to get container status \"28529004a7ec64b2187fc8bf650bafe9239d2ac9bd1f98eb34807766989bbd1e\": rpc error: code = 
NotFound desc = could not find container \"28529004a7ec64b2187fc8bf650bafe9239d2ac9bd1f98eb34807766989bbd1e\": container with ID starting with 28529004a7ec64b2187fc8bf650bafe9239d2ac9bd1f98eb34807766989bbd1e not found: ID does not exist" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.134315 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f813ef05-9064-465b-8e51-37e08d0d9021" (UID: "f813ef05-9064-465b-8e51-37e08d0d9021"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.184465 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.184494 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48mhr\" (UniqueName: \"kubernetes.io/projected/f813ef05-9064-465b-8e51-37e08d0d9021-kube-api-access-48mhr\") on node \"crc\" DevicePath \"\"" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.184511 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f813ef05-9064-465b-8e51-37e08d0d9021-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.333761 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrx2d"] Jan 01 09:59:51 crc kubenswrapper[4867]: I0101 09:59:51.339911 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jrx2d"] Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.140882 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f813ef05-9064-465b-8e51-37e08d0d9021" path="/var/lib/kubelet/pods/f813ef05-9064-465b-8e51-37e08d0d9021/volumes" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.476122 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9sp5v"] Jan 01 09:59:53 crc kubenswrapper[4867]: E0101 09:59:53.476484 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f813ef05-9064-465b-8e51-37e08d0d9021" containerName="registry-server" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.476505 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f813ef05-9064-465b-8e51-37e08d0d9021" containerName="registry-server" Jan 01 09:59:53 crc kubenswrapper[4867]: E0101 09:59:53.476525 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f813ef05-9064-465b-8e51-37e08d0d9021" containerName="extract-utilities" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.476533 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f813ef05-9064-465b-8e51-37e08d0d9021" containerName="extract-utilities" Jan 01 09:59:53 crc kubenswrapper[4867]: E0101 09:59:53.476551 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f813ef05-9064-465b-8e51-37e08d0d9021" containerName="extract-content" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.476559 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f813ef05-9064-465b-8e51-37e08d0d9021" containerName="extract-content" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.476753 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f813ef05-9064-465b-8e51-37e08d0d9021" containerName="registry-server" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.478355 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.486492 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9sp5v"] Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.648802 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-catalog-content\") pod \"certified-operators-9sp5v\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.649082 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9fjv\" (UniqueName: \"kubernetes.io/projected/da41bfce-ede5-4cc0-8661-775b11aeffce-kube-api-access-z9fjv\") pod \"certified-operators-9sp5v\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.649130 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-utilities\") pod \"certified-operators-9sp5v\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.751186 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-utilities\") pod \"certified-operators-9sp5v\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.751233 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z9fjv\" (UniqueName: \"kubernetes.io/projected/da41bfce-ede5-4cc0-8661-775b11aeffce-kube-api-access-z9fjv\") pod \"certified-operators-9sp5v\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.751319 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-catalog-content\") pod \"certified-operators-9sp5v\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.751762 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-catalog-content\") pod \"certified-operators-9sp5v\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.751907 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-utilities\") pod \"certified-operators-9sp5v\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.770631 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9fjv\" (UniqueName: \"kubernetes.io/projected/da41bfce-ede5-4cc0-8661-775b11aeffce-kube-api-access-z9fjv\") pod \"certified-operators-9sp5v\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 09:59:53 crc kubenswrapper[4867]: I0101 09:59:53.842024 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 09:59:54 crc kubenswrapper[4867]: I0101 09:59:54.366177 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9sp5v"] Jan 01 09:59:54 crc kubenswrapper[4867]: W0101 09:59:54.388449 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda41bfce_ede5_4cc0_8661_775b11aeffce.slice/crio-ad08d02507261b7492b34fbe73cdb5543e5432c867371ed04017c0ea20fbb058 WatchSource:0}: Error finding container ad08d02507261b7492b34fbe73cdb5543e5432c867371ed04017c0ea20fbb058: Status 404 returned error can't find the container with id ad08d02507261b7492b34fbe73cdb5543e5432c867371ed04017c0ea20fbb058 Jan 01 09:59:55 crc kubenswrapper[4867]: I0101 09:59:55.067538 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sp5v" event={"ID":"da41bfce-ede5-4cc0-8661-775b11aeffce","Type":"ContainerStarted","Data":"ad08d02507261b7492b34fbe73cdb5543e5432c867371ed04017c0ea20fbb058"} Jan 01 09:59:56 crc kubenswrapper[4867]: I0101 09:59:56.077105 4867 generic.go:334] "Generic (PLEG): container finished" podID="da41bfce-ede5-4cc0-8661-775b11aeffce" containerID="98b1b017163d735b83eda96bf183639ea36edbd7c0d07f0d6802f9ba331af9b5" exitCode=0 Jan 01 09:59:56 crc kubenswrapper[4867]: I0101 09:59:56.077206 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sp5v" event={"ID":"da41bfce-ede5-4cc0-8661-775b11aeffce","Type":"ContainerDied","Data":"98b1b017163d735b83eda96bf183639ea36edbd7c0d07f0d6802f9ba331af9b5"} Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.171551 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf"] Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.175228 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.178875 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.179123 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.207238 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf"] Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.355413 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzq58\" (UniqueName: \"kubernetes.io/projected/4bd35d89-54a6-4c50-ad75-6a6744a49117-kube-api-access-nzq58\") pod \"collect-profiles-29454360-l77xf\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.355646 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bd35d89-54a6-4c50-ad75-6a6744a49117-secret-volume\") pod \"collect-profiles-29454360-l77xf\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.355724 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bd35d89-54a6-4c50-ad75-6a6744a49117-config-volume\") pod \"collect-profiles-29454360-l77xf\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.457342 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzq58\" (UniqueName: \"kubernetes.io/projected/4bd35d89-54a6-4c50-ad75-6a6744a49117-kube-api-access-nzq58\") pod \"collect-profiles-29454360-l77xf\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.457399 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bd35d89-54a6-4c50-ad75-6a6744a49117-secret-volume\") pod \"collect-profiles-29454360-l77xf\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.457440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bd35d89-54a6-4c50-ad75-6a6744a49117-config-volume\") pod \"collect-profiles-29454360-l77xf\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.458381 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bd35d89-54a6-4c50-ad75-6a6744a49117-config-volume\") pod \"collect-profiles-29454360-l77xf\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.470641 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4bd35d89-54a6-4c50-ad75-6a6744a49117-secret-volume\") pod \"collect-profiles-29454360-l77xf\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.487747 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzq58\" (UniqueName: \"kubernetes.io/projected/4bd35d89-54a6-4c50-ad75-6a6744a49117-kube-api-access-nzq58\") pod \"collect-profiles-29454360-l77xf\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:00 crc kubenswrapper[4867]: I0101 10:00:00.491515 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:01 crc kubenswrapper[4867]: I0101 10:00:01.756761 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf"] Jan 01 10:00:01 crc kubenswrapper[4867]: W0101 10:00:01.762532 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bd35d89_54a6_4c50_ad75_6a6744a49117.slice/crio-9ebaf7ce933e7c7e35a9b0a26b91ca720fed0d2c71ae5aa1709faaf9391d74f3 WatchSource:0}: Error finding container 9ebaf7ce933e7c7e35a9b0a26b91ca720fed0d2c71ae5aa1709faaf9391d74f3: Status 404 returned error can't find the container with id 9ebaf7ce933e7c7e35a9b0a26b91ca720fed0d2c71ae5aa1709faaf9391d74f3 Jan 01 10:00:02 crc kubenswrapper[4867]: I0101 10:00:02.131451 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" event={"ID":"4bd35d89-54a6-4c50-ad75-6a6744a49117","Type":"ContainerStarted","Data":"36085f52368be9ea2d4d47e2883944c84063cdf1fff05f019eb094bccf90b530"} Jan 01 10:00:02 crc 
kubenswrapper[4867]: I0101 10:00:02.132044 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" event={"ID":"4bd35d89-54a6-4c50-ad75-6a6744a49117","Type":"ContainerStarted","Data":"9ebaf7ce933e7c7e35a9b0a26b91ca720fed0d2c71ae5aa1709faaf9391d74f3"} Jan 01 10:00:02 crc kubenswrapper[4867]: I0101 10:00:02.133780 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2k4nq/crc-debug-6gr65" event={"ID":"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c","Type":"ContainerStarted","Data":"394e7bf21d397093af4efad2f3d3f3fc9097caba43614ee4d4f715da5cd7fd8f"} Jan 01 10:00:02 crc kubenswrapper[4867]: I0101 10:00:02.160150 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" podStartSLOduration=2.160127887 podStartE2EDuration="2.160127887s" podCreationTimestamp="2026-01-01 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 10:00:02.156844324 +0000 UTC m=+5611.292113093" watchObservedRunningTime="2026-01-01 10:00:02.160127887 +0000 UTC m=+5611.295396656" Jan 01 10:00:02 crc kubenswrapper[4867]: I0101 10:00:02.181334 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2k4nq/crc-debug-6gr65" podStartSLOduration=1.933386982 podStartE2EDuration="13.181318227s" podCreationTimestamp="2026-01-01 09:59:49 +0000 UTC" firstStartedPulling="2026-01-01 09:59:50.173176238 +0000 UTC m=+5599.308445027" lastFinishedPulling="2026-01-01 10:00:01.421107503 +0000 UTC m=+5610.556376272" observedRunningTime="2026-01-01 10:00:02.180843484 +0000 UTC m=+5611.316112253" watchObservedRunningTime="2026-01-01 10:00:02.181318227 +0000 UTC m=+5611.316586996" Jan 01 10:00:03 crc kubenswrapper[4867]: I0101 10:00:03.143590 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="4bd35d89-54a6-4c50-ad75-6a6744a49117" containerID="36085f52368be9ea2d4d47e2883944c84063cdf1fff05f019eb094bccf90b530" exitCode=0 Jan 01 10:00:03 crc kubenswrapper[4867]: I0101 10:00:03.143661 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" event={"ID":"4bd35d89-54a6-4c50-ad75-6a6744a49117","Type":"ContainerDied","Data":"36085f52368be9ea2d4d47e2883944c84063cdf1fff05f019eb094bccf90b530"} Jan 01 10:00:03 crc kubenswrapper[4867]: I0101 10:00:03.145793 4867 generic.go:334] "Generic (PLEG): container finished" podID="da41bfce-ede5-4cc0-8661-775b11aeffce" containerID="7b3d92debf26c9e42ff3b667382f2c71f00a2ec6f988ebc0b04c8a7d225d4aa3" exitCode=0 Jan 01 10:00:03 crc kubenswrapper[4867]: I0101 10:00:03.145950 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sp5v" event={"ID":"da41bfce-ede5-4cc0-8661-775b11aeffce","Type":"ContainerDied","Data":"7b3d92debf26c9e42ff3b667382f2c71f00a2ec6f988ebc0b04c8a7d225d4aa3"} Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.155282 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sp5v" event={"ID":"da41bfce-ede5-4cc0-8661-775b11aeffce","Type":"ContainerStarted","Data":"a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72"} Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.479340 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.548252 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9sp5v" podStartSLOduration=9.16660027 podStartE2EDuration="11.548230385s" podCreationTimestamp="2026-01-01 09:59:53 +0000 UTC" firstStartedPulling="2026-01-01 10:00:01.299390483 +0000 UTC m=+5610.434659252" lastFinishedPulling="2026-01-01 10:00:03.681020588 +0000 UTC m=+5612.816289367" observedRunningTime="2026-01-01 10:00:04.170765007 +0000 UTC m=+5613.306033786" watchObservedRunningTime="2026-01-01 10:00:04.548230385 +0000 UTC m=+5613.683499154" Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.646672 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzq58\" (UniqueName: \"kubernetes.io/projected/4bd35d89-54a6-4c50-ad75-6a6744a49117-kube-api-access-nzq58\") pod \"4bd35d89-54a6-4c50-ad75-6a6744a49117\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.647065 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bd35d89-54a6-4c50-ad75-6a6744a49117-secret-volume\") pod \"4bd35d89-54a6-4c50-ad75-6a6744a49117\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.647371 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bd35d89-54a6-4c50-ad75-6a6744a49117-config-volume\") pod \"4bd35d89-54a6-4c50-ad75-6a6744a49117\" (UID: \"4bd35d89-54a6-4c50-ad75-6a6744a49117\") " Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.649221 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4bd35d89-54a6-4c50-ad75-6a6744a49117-config-volume" (OuterVolumeSpecName: "config-volume") pod "4bd35d89-54a6-4c50-ad75-6a6744a49117" (UID: "4bd35d89-54a6-4c50-ad75-6a6744a49117"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.654415 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd35d89-54a6-4c50-ad75-6a6744a49117-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4bd35d89-54a6-4c50-ad75-6a6744a49117" (UID: "4bd35d89-54a6-4c50-ad75-6a6744a49117"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.658097 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd35d89-54a6-4c50-ad75-6a6744a49117-kube-api-access-nzq58" (OuterVolumeSpecName: "kube-api-access-nzq58") pod "4bd35d89-54a6-4c50-ad75-6a6744a49117" (UID: "4bd35d89-54a6-4c50-ad75-6a6744a49117"). InnerVolumeSpecName "kube-api-access-nzq58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.749012 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bd35d89-54a6-4c50-ad75-6a6744a49117-config-volume\") on node \"crc\" DevicePath \"\"" Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.749067 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzq58\" (UniqueName: \"kubernetes.io/projected/4bd35d89-54a6-4c50-ad75-6a6744a49117-kube-api-access-nzq58\") on node \"crc\" DevicePath \"\"" Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.749080 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bd35d89-54a6-4c50-ad75-6a6744a49117-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.816957 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6"] Jan 01 10:00:04 crc kubenswrapper[4867]: I0101 10:00:04.825033 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29454315-g2ds6"] Jan 01 10:00:05 crc kubenswrapper[4867]: I0101 10:00:05.138895 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ccd453a-3c47-4fad-88e6-e5dc9b9bd631" path="/var/lib/kubelet/pods/7ccd453a-3c47-4fad-88e6-e5dc9b9bd631/volumes" Jan 01 10:00:05 crc kubenswrapper[4867]: I0101 10:00:05.164382 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" Jan 01 10:00:05 crc kubenswrapper[4867]: I0101 10:00:05.164765 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29454360-l77xf" event={"ID":"4bd35d89-54a6-4c50-ad75-6a6744a49117","Type":"ContainerDied","Data":"9ebaf7ce933e7c7e35a9b0a26b91ca720fed0d2c71ae5aa1709faaf9391d74f3"} Jan 01 10:00:05 crc kubenswrapper[4867]: I0101 10:00:05.164785 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ebaf7ce933e7c7e35a9b0a26b91ca720fed0d2c71ae5aa1709faaf9391d74f3" Jan 01 10:00:13 crc kubenswrapper[4867]: I0101 10:00:13.843189 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 10:00:13 crc kubenswrapper[4867]: I0101 10:00:13.843629 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 10:00:13 crc kubenswrapper[4867]: I0101 10:00:13.890312 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 10:00:14 crc kubenswrapper[4867]: I0101 10:00:14.305503 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 10:00:14 crc kubenswrapper[4867]: I0101 10:00:14.361877 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9sp5v"] Jan 01 10:00:16 crc kubenswrapper[4867]: I0101 10:00:16.259614 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9sp5v" podUID="da41bfce-ede5-4cc0-8661-775b11aeffce" containerName="registry-server" containerID="cri-o://a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72" gracePeriod=2 Jan 01 10:00:17 crc 
kubenswrapper[4867]: I0101 10:00:17.194284 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.269336 4867 generic.go:334] "Generic (PLEG): container finished" podID="da41bfce-ede5-4cc0-8661-775b11aeffce" containerID="a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72" exitCode=0 Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.269381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sp5v" event={"ID":"da41bfce-ede5-4cc0-8661-775b11aeffce","Type":"ContainerDied","Data":"a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72"} Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.269408 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sp5v" event={"ID":"da41bfce-ede5-4cc0-8661-775b11aeffce","Type":"ContainerDied","Data":"ad08d02507261b7492b34fbe73cdb5543e5432c867371ed04017c0ea20fbb058"} Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.269417 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9sp5v" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.269629 4867 scope.go:117] "RemoveContainer" containerID="a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.301838 4867 scope.go:117] "RemoveContainer" containerID="7b3d92debf26c9e42ff3b667382f2c71f00a2ec6f988ebc0b04c8a7d225d4aa3" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.323525 4867 scope.go:117] "RemoveContainer" containerID="98b1b017163d735b83eda96bf183639ea36edbd7c0d07f0d6802f9ba331af9b5" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.358592 4867 scope.go:117] "RemoveContainer" containerID="a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72" Jan 01 10:00:17 crc kubenswrapper[4867]: E0101 10:00:17.359104 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72\": container with ID starting with a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72 not found: ID does not exist" containerID="a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.359147 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72"} err="failed to get container status \"a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72\": rpc error: code = NotFound desc = could not find container \"a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72\": container with ID starting with a2587772420f41245d91d911681b8d9c3aea4e0db9bba64947c2db61240f1d72 not found: ID does not exist" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.359172 4867 scope.go:117] "RemoveContainer" 
containerID="7b3d92debf26c9e42ff3b667382f2c71f00a2ec6f988ebc0b04c8a7d225d4aa3" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.359335 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-utilities\") pod \"da41bfce-ede5-4cc0-8661-775b11aeffce\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.359418 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-catalog-content\") pod \"da41bfce-ede5-4cc0-8661-775b11aeffce\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.359528 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9fjv\" (UniqueName: \"kubernetes.io/projected/da41bfce-ede5-4cc0-8661-775b11aeffce-kube-api-access-z9fjv\") pod \"da41bfce-ede5-4cc0-8661-775b11aeffce\" (UID: \"da41bfce-ede5-4cc0-8661-775b11aeffce\") " Jan 01 10:00:17 crc kubenswrapper[4867]: E0101 10:00:17.359762 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3d92debf26c9e42ff3b667382f2c71f00a2ec6f988ebc0b04c8a7d225d4aa3\": container with ID starting with 7b3d92debf26c9e42ff3b667382f2c71f00a2ec6f988ebc0b04c8a7d225d4aa3 not found: ID does not exist" containerID="7b3d92debf26c9e42ff3b667382f2c71f00a2ec6f988ebc0b04c8a7d225d4aa3" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.359787 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3d92debf26c9e42ff3b667382f2c71f00a2ec6f988ebc0b04c8a7d225d4aa3"} err="failed to get container status \"7b3d92debf26c9e42ff3b667382f2c71f00a2ec6f988ebc0b04c8a7d225d4aa3\": rpc error: code = NotFound desc = could not find 
container \"7b3d92debf26c9e42ff3b667382f2c71f00a2ec6f988ebc0b04c8a7d225d4aa3\": container with ID starting with 7b3d92debf26c9e42ff3b667382f2c71f00a2ec6f988ebc0b04c8a7d225d4aa3 not found: ID does not exist" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.359800 4867 scope.go:117] "RemoveContainer" containerID="98b1b017163d735b83eda96bf183639ea36edbd7c0d07f0d6802f9ba331af9b5" Jan 01 10:00:17 crc kubenswrapper[4867]: E0101 10:00:17.360053 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b1b017163d735b83eda96bf183639ea36edbd7c0d07f0d6802f9ba331af9b5\": container with ID starting with 98b1b017163d735b83eda96bf183639ea36edbd7c0d07f0d6802f9ba331af9b5 not found: ID does not exist" containerID="98b1b017163d735b83eda96bf183639ea36edbd7c0d07f0d6802f9ba331af9b5" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.360074 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b1b017163d735b83eda96bf183639ea36edbd7c0d07f0d6802f9ba331af9b5"} err="failed to get container status \"98b1b017163d735b83eda96bf183639ea36edbd7c0d07f0d6802f9ba331af9b5\": rpc error: code = NotFound desc = could not find container \"98b1b017163d735b83eda96bf183639ea36edbd7c0d07f0d6802f9ba331af9b5\": container with ID starting with 98b1b017163d735b83eda96bf183639ea36edbd7c0d07f0d6802f9ba331af9b5 not found: ID does not exist" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.360319 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-utilities" (OuterVolumeSpecName: "utilities") pod "da41bfce-ede5-4cc0-8661-775b11aeffce" (UID: "da41bfce-ede5-4cc0-8661-775b11aeffce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.365077 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da41bfce-ede5-4cc0-8661-775b11aeffce-kube-api-access-z9fjv" (OuterVolumeSpecName: "kube-api-access-z9fjv") pod "da41bfce-ede5-4cc0-8661-775b11aeffce" (UID: "da41bfce-ede5-4cc0-8661-775b11aeffce"). InnerVolumeSpecName "kube-api-access-z9fjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.404947 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da41bfce-ede5-4cc0-8661-775b11aeffce" (UID: "da41bfce-ede5-4cc0-8661-775b11aeffce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.462261 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9fjv\" (UniqueName: \"kubernetes.io/projected/da41bfce-ede5-4cc0-8661-775b11aeffce-kube-api-access-z9fjv\") on node \"crc\" DevicePath \"\"" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.462295 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-utilities\") on node \"crc\" DevicePath \"\"" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.462307 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41bfce-ede5-4cc0-8661-775b11aeffce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 10:00:17.614824 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9sp5v"] Jan 01 10:00:17 crc kubenswrapper[4867]: I0101 
10:00:17.624342 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9sp5v"] Jan 01 10:00:19 crc kubenswrapper[4867]: I0101 10:00:19.140178 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da41bfce-ede5-4cc0-8661-775b11aeffce" path="/var/lib/kubelet/pods/da41bfce-ede5-4cc0-8661-775b11aeffce/volumes" Jan 01 10:00:31 crc kubenswrapper[4867]: I0101 10:00:31.380939 4867 generic.go:334] "Generic (PLEG): container finished" podID="e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c" containerID="394e7bf21d397093af4efad2f3d3f3fc9097caba43614ee4d4f715da5cd7fd8f" exitCode=0 Jan 01 10:00:31 crc kubenswrapper[4867]: I0101 10:00:31.381048 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2k4nq/crc-debug-6gr65" event={"ID":"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c","Type":"ContainerDied","Data":"394e7bf21d397093af4efad2f3d3f3fc9097caba43614ee4d4f715da5cd7fd8f"} Jan 01 10:00:32 crc kubenswrapper[4867]: I0101 10:00:32.487050 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2k4nq/crc-debug-6gr65" Jan 01 10:00:32 crc kubenswrapper[4867]: I0101 10:00:32.538724 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2k4nq/crc-debug-6gr65"] Jan 01 10:00:32 crc kubenswrapper[4867]: I0101 10:00:32.546206 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2k4nq/crc-debug-6gr65"] Jan 01 10:00:32 crc kubenswrapper[4867]: I0101 10:00:32.659712 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd54n\" (UniqueName: \"kubernetes.io/projected/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-kube-api-access-wd54n\") pod \"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c\" (UID: \"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c\") " Jan 01 10:00:32 crc kubenswrapper[4867]: I0101 10:00:32.659813 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-host\") pod \"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c\" (UID: \"e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c\") " Jan 01 10:00:32 crc kubenswrapper[4867]: I0101 10:00:32.660003 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-host" (OuterVolumeSpecName: "host") pod "e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c" (UID: "e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 10:00:32 crc kubenswrapper[4867]: I0101 10:00:32.660363 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-host\") on node \"crc\" DevicePath \"\"" Jan 01 10:00:32 crc kubenswrapper[4867]: I0101 10:00:32.666660 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-kube-api-access-wd54n" (OuterVolumeSpecName: "kube-api-access-wd54n") pod "e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c" (UID: "e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c"). InnerVolumeSpecName "kube-api-access-wd54n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 10:00:32 crc kubenswrapper[4867]: I0101 10:00:32.761403 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd54n\" (UniqueName: \"kubernetes.io/projected/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c-kube-api-access-wd54n\") on node \"crc\" DevicePath \"\"" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.146809 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c" path="/var/lib/kubelet/pods/e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c/volumes" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.398832 4867 scope.go:117] "RemoveContainer" containerID="394e7bf21d397093af4efad2f3d3f3fc9097caba43614ee4d4f715da5cd7fd8f" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.398857 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2k4nq/crc-debug-6gr65" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.748462 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2k4nq/crc-debug-5kw6h"] Jan 01 10:00:33 crc kubenswrapper[4867]: E0101 10:00:33.748833 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c" containerName="container-00" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.748848 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c" containerName="container-00" Jan 01 10:00:33 crc kubenswrapper[4867]: E0101 10:00:33.748863 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41bfce-ede5-4cc0-8661-775b11aeffce" containerName="extract-utilities" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.748871 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="da41bfce-ede5-4cc0-8661-775b11aeffce" containerName="extract-utilities" Jan 01 10:00:33 crc kubenswrapper[4867]: E0101 10:00:33.748920 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41bfce-ede5-4cc0-8661-775b11aeffce" containerName="registry-server" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.748929 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="da41bfce-ede5-4cc0-8661-775b11aeffce" containerName="registry-server" Jan 01 10:00:33 crc kubenswrapper[4867]: E0101 10:00:33.748942 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41bfce-ede5-4cc0-8661-775b11aeffce" containerName="extract-content" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.748950 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="da41bfce-ede5-4cc0-8661-775b11aeffce" containerName="extract-content" Jan 01 10:00:33 crc kubenswrapper[4867]: E0101 10:00:33.748967 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd35d89-54a6-4c50-ad75-6a6744a49117" 
containerName="collect-profiles" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.748974 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd35d89-54a6-4c50-ad75-6a6744a49117" containerName="collect-profiles" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.749173 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="da41bfce-ede5-4cc0-8661-775b11aeffce" containerName="registry-server" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.749197 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e422cbc7-a3ab-4ed9-9e73-db95fd37ab9c" containerName="container-00" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.749209 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd35d89-54a6-4c50-ad75-6a6744a49117" containerName="collect-profiles" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.749779 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.881055 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f669060e-ceaa-44ce-94b2-590872953bea-host\") pod \"crc-debug-5kw6h\" (UID: \"f669060e-ceaa-44ce-94b2-590872953bea\") " pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.881236 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmkc8\" (UniqueName: \"kubernetes.io/projected/f669060e-ceaa-44ce-94b2-590872953bea-kube-api-access-lmkc8\") pod \"crc-debug-5kw6h\" (UID: \"f669060e-ceaa-44ce-94b2-590872953bea\") " pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.988918 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmkc8\" (UniqueName: 
\"kubernetes.io/projected/f669060e-ceaa-44ce-94b2-590872953bea-kube-api-access-lmkc8\") pod \"crc-debug-5kw6h\" (UID: \"f669060e-ceaa-44ce-94b2-590872953bea\") " pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.989263 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f669060e-ceaa-44ce-94b2-590872953bea-host\") pod \"crc-debug-5kw6h\" (UID: \"f669060e-ceaa-44ce-94b2-590872953bea\") " pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" Jan 01 10:00:33 crc kubenswrapper[4867]: I0101 10:00:33.989551 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f669060e-ceaa-44ce-94b2-590872953bea-host\") pod \"crc-debug-5kw6h\" (UID: \"f669060e-ceaa-44ce-94b2-590872953bea\") " pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" Jan 01 10:00:34 crc kubenswrapper[4867]: I0101 10:00:34.021668 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmkc8\" (UniqueName: \"kubernetes.io/projected/f669060e-ceaa-44ce-94b2-590872953bea-kube-api-access-lmkc8\") pod \"crc-debug-5kw6h\" (UID: \"f669060e-ceaa-44ce-94b2-590872953bea\") " pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" Jan 01 10:00:34 crc kubenswrapper[4867]: I0101 10:00:34.078230 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" Jan 01 10:00:34 crc kubenswrapper[4867]: W0101 10:00:34.105876 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf669060e_ceaa_44ce_94b2_590872953bea.slice/crio-b08ac9721d4eac2c8630561a98bb6416876d79f3ab4e9400d9fcb09c8fb59f8a WatchSource:0}: Error finding container b08ac9721d4eac2c8630561a98bb6416876d79f3ab4e9400d9fcb09c8fb59f8a: Status 404 returned error can't find the container with id b08ac9721d4eac2c8630561a98bb6416876d79f3ab4e9400d9fcb09c8fb59f8a Jan 01 10:00:34 crc kubenswrapper[4867]: I0101 10:00:34.412452 4867 generic.go:334] "Generic (PLEG): container finished" podID="f669060e-ceaa-44ce-94b2-590872953bea" containerID="18e161982bea2ec026db586185dc950f8e67c1ae8147dcdfc98115ffb2b9e0fb" exitCode=1 Jan 01 10:00:34 crc kubenswrapper[4867]: I0101 10:00:34.412580 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" event={"ID":"f669060e-ceaa-44ce-94b2-590872953bea","Type":"ContainerDied","Data":"18e161982bea2ec026db586185dc950f8e67c1ae8147dcdfc98115ffb2b9e0fb"} Jan 01 10:00:34 crc kubenswrapper[4867]: I0101 10:00:34.412662 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" event={"ID":"f669060e-ceaa-44ce-94b2-590872953bea","Type":"ContainerStarted","Data":"b08ac9721d4eac2c8630561a98bb6416876d79f3ab4e9400d9fcb09c8fb59f8a"} Jan 01 10:00:34 crc kubenswrapper[4867]: I0101 10:00:34.472008 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2k4nq/crc-debug-5kw6h"] Jan 01 10:00:34 crc kubenswrapper[4867]: I0101 10:00:34.484084 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2k4nq/crc-debug-5kw6h"] Jan 01 10:00:35 crc kubenswrapper[4867]: I0101 10:00:35.503979 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" Jan 01 10:00:35 crc kubenswrapper[4867]: I0101 10:00:35.617827 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmkc8\" (UniqueName: \"kubernetes.io/projected/f669060e-ceaa-44ce-94b2-590872953bea-kube-api-access-lmkc8\") pod \"f669060e-ceaa-44ce-94b2-590872953bea\" (UID: \"f669060e-ceaa-44ce-94b2-590872953bea\") " Jan 01 10:00:35 crc kubenswrapper[4867]: I0101 10:00:35.618013 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f669060e-ceaa-44ce-94b2-590872953bea-host\") pod \"f669060e-ceaa-44ce-94b2-590872953bea\" (UID: \"f669060e-ceaa-44ce-94b2-590872953bea\") " Jan 01 10:00:35 crc kubenswrapper[4867]: I0101 10:00:35.618315 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f669060e-ceaa-44ce-94b2-590872953bea-host" (OuterVolumeSpecName: "host") pod "f669060e-ceaa-44ce-94b2-590872953bea" (UID: "f669060e-ceaa-44ce-94b2-590872953bea"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 01 10:00:35 crc kubenswrapper[4867]: I0101 10:00:35.640222 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f669060e-ceaa-44ce-94b2-590872953bea-kube-api-access-lmkc8" (OuterVolumeSpecName: "kube-api-access-lmkc8") pod "f669060e-ceaa-44ce-94b2-590872953bea" (UID: "f669060e-ceaa-44ce-94b2-590872953bea"). InnerVolumeSpecName "kube-api-access-lmkc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 10:00:35 crc kubenswrapper[4867]: I0101 10:00:35.720421 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f669060e-ceaa-44ce-94b2-590872953bea-host\") on node \"crc\" DevicePath \"\"" Jan 01 10:00:35 crc kubenswrapper[4867]: I0101 10:00:35.720452 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmkc8\" (UniqueName: \"kubernetes.io/projected/f669060e-ceaa-44ce-94b2-590872953bea-kube-api-access-lmkc8\") on node \"crc\" DevicePath \"\"" Jan 01 10:00:36 crc kubenswrapper[4867]: I0101 10:00:36.429143 4867 scope.go:117] "RemoveContainer" containerID="18e161982bea2ec026db586185dc950f8e67c1ae8147dcdfc98115ffb2b9e0fb" Jan 01 10:00:36 crc kubenswrapper[4867]: I0101 10:00:36.429163 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2k4nq/crc-debug-5kw6h" Jan 01 10:00:37 crc kubenswrapper[4867]: I0101 10:00:37.146109 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f669060e-ceaa-44ce-94b2-590872953bea" path="/var/lib/kubelet/pods/f669060e-ceaa-44ce-94b2-590872953bea/volumes" Jan 01 10:00:41 crc kubenswrapper[4867]: I0101 10:00:41.876297 4867 scope.go:117] "RemoveContainer" containerID="c80bd55e3c9000c010da52c346ec10195daeacc554c41911326baceb7894f5c2" Jan 01 10:00:42 crc kubenswrapper[4867]: I0101 10:00:42.315656 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69f667c44c-wpw5v_436e2299-2fd9-4973-aa16-f5a02aa58c36/init/0.log" Jan 01 10:00:42 crc kubenswrapper[4867]: I0101 10:00:42.570070 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69f667c44c-wpw5v_436e2299-2fd9-4973-aa16-f5a02aa58c36/init/0.log" Jan 01 10:00:42 crc kubenswrapper[4867]: I0101 10:00:42.588871 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-69f667c44c-wpw5v_436e2299-2fd9-4973-aa16-f5a02aa58c36/dnsmasq-dns/0.log" Jan 01 10:00:42 crc kubenswrapper[4867]: I0101 10:00:42.783913 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-z9c4q_92572092-af40-4fa8-973d-2d38bce43919/keystone-bootstrap/0.log" Jan 01 10:00:42 crc kubenswrapper[4867]: I0101 10:00:42.828066 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-xz5g7_3f39adf8-53ff-43ed-8646-9bee0c5fad79/mariadb-database-create/0.log" Jan 01 10:00:42 crc kubenswrapper[4867]: I0101 10:00:42.973788 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-4gdmp_27def929-31ca-4a9c-af0e-5830dddceab9/keystone-db-sync/0.log" Jan 01 10:00:43 crc kubenswrapper[4867]: I0101 10:00:43.090985 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-f125-account-create-update-jh6hd_f374d6d6-c44d-4f8a-b4ed-ee985a92300f/mariadb-account-create-update/0.log" Jan 01 10:00:43 crc kubenswrapper[4867]: I0101 10:00:43.153339 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-ffdb4dfc7-dbp8r_82018793-9d72-4fd1-b828-368c2ed205d9/keystone-api/0.log" Jan 01 10:00:43 crc kubenswrapper[4867]: I0101 10:00:43.274970 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_c5c828c6-13cc-4866-ae31-8cb33206e039/adoption/0.log" Jan 01 10:00:43 crc kubenswrapper[4867]: I0101 10:00:43.455188 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_32626dd7-5d1c-4beb-84bd-b97be403cc4a/mysql-bootstrap/0.log" Jan 01 10:00:43 crc kubenswrapper[4867]: I0101 10:00:43.669585 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_32626dd7-5d1c-4beb-84bd-b97be403cc4a/mysql-bootstrap/0.log" Jan 01 10:00:43 crc kubenswrapper[4867]: I0101 10:00:43.714335 4867 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_32626dd7-5d1c-4beb-84bd-b97be403cc4a/galera/0.log" Jan 01 10:00:43 crc kubenswrapper[4867]: I0101 10:00:43.882978 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9fd0654b-6402-47a9-baa9-e172d84990a1/mysql-bootstrap/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.055287 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9fd0654b-6402-47a9-baa9-e172d84990a1/mysql-bootstrap/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.091349 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9fd0654b-6402-47a9-baa9-e172d84990a1/galera/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.253657 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6b30be34-6c47-4a46-92fd-e6629f548214/openstackclient/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.334122 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_06c46cb6-0f6e-49e3-bbd8-c0ffbedfd8ab/adoption/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.463288 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ffdde0e8-731c-4b7d-ae0f-9e6d793004c3/memcached/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.518537 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c356edf9-e029-4fb9-b0e5-7cd7b5544429/openstack-network-exporter/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.568525 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c356edf9-e029-4fb9-b0e5-7cd7b5544429/ovn-northd/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.716831 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05/ovsdbserver-nb/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.717923 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c82f7c72-fbe7-4d7e-9db9-8c1bb5dfcc05/openstack-network-exporter/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.871507 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_73ffb5ca-edc5-4352-8131-bd2322e6f9da/openstack-network-exporter/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.897470 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_73ffb5ca-edc5-4352-8131-bd2322e6f9da/ovsdbserver-nb/0.log" Jan 01 10:00:44 crc kubenswrapper[4867]: I0101 10:00:44.942935 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_6794a96b-8a53-4b4d-81c5-61f54a3fe243/openstack-network-exporter/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.037709 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_6794a96b-8a53-4b4d-81c5-61f54a3fe243/ovsdbserver-nb/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.088427 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3d31a0cf-43f1-4682-a8f0-2e778d2a06e4/openstack-network-exporter/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.123301 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3d31a0cf-43f1-4682-a8f0-2e778d2a06e4/ovsdbserver-sb/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.254378 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_efced2ba-0a7c-4f12-8c38-442eff97aae8/openstack-network-exporter/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.274180 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-1_efced2ba-0a7c-4f12-8c38-442eff97aae8/ovsdbserver-sb/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.412438 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_eb6e3759-2038-4c6c-bd1a-4702d3f638f6/openstack-network-exporter/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.466600 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_eb6e3759-2038-4c6c-bd1a-4702d3f638f6/ovsdbserver-sb/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.550976 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4de45c4e-693b-4158-8b1c-0a50d54ae477/setup-container/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.750310 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4de45c4e-693b-4158-8b1c-0a50d54ae477/setup-container/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.781247 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_eaf26a82-cbbb-41bd-89ed-9722ddd150cf/setup-container/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.799490 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4de45c4e-693b-4158-8b1c-0a50d54ae477/rabbitmq/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.928431 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_eaf26a82-cbbb-41bd-89ed-9722ddd150cf/setup-container/0.log" Jan 01 10:00:45 crc kubenswrapper[4867]: I0101 10:00:45.977845 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_eaf26a82-cbbb-41bd-89ed-9722ddd150cf/rabbitmq/0.log" Jan 01 10:00:51 crc kubenswrapper[4867]: I0101 10:00:51.331560 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 10:00:51 crc kubenswrapper[4867]: I0101 10:00:51.332239 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.156465 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29454361-9sqxn"] Jan 01 10:01:00 crc kubenswrapper[4867]: E0101 10:01:00.157403 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f669060e-ceaa-44ce-94b2-590872953bea" containerName="container-00" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.157418 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f669060e-ceaa-44ce-94b2-590872953bea" containerName="container-00" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.157642 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f669060e-ceaa-44ce-94b2-590872953bea" containerName="container-00" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.158300 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.175806 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29454361-9sqxn"] Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.228841 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-combined-ca-bundle\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.228949 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-config-data\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.229086 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn69j\" (UniqueName: \"kubernetes.io/projected/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-kube-api-access-wn69j\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.229106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-fernet-keys\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.331015 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-wn69j\" (UniqueName: \"kubernetes.io/projected/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-kube-api-access-wn69j\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.331110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-fernet-keys\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.332410 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-combined-ca-bundle\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.332481 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-config-data\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.337672 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-fernet-keys\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.339637 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-config-data\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.340221 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-combined-ca-bundle\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.346978 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn69j\" (UniqueName: \"kubernetes.io/projected/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-kube-api-access-wn69j\") pod \"keystone-cron-29454361-9sqxn\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.481638 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:00 crc kubenswrapper[4867]: I0101 10:01:00.901774 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29454361-9sqxn"] Jan 01 10:01:01 crc kubenswrapper[4867]: I0101 10:01:01.468838 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-k6qkg_80077b2f-5e6e-49f7-9d98-8c1004ab2cd4/manager/0.log" Jan 01 10:01:01 crc kubenswrapper[4867]: I0101 10:01:01.595620 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-mxm97_89e16415-08c2-45fe-8a85-b1f12d047cde/manager/0.log" Jan 01 10:01:01 crc kubenswrapper[4867]: I0101 10:01:01.648716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29454361-9sqxn" event={"ID":"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3","Type":"ContainerStarted","Data":"989533df551d41404a3adc7f472d053c7e58e238c8ff2ad812c129fff077129c"} Jan 01 10:01:01 crc kubenswrapper[4867]: I0101 10:01:01.649017 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29454361-9sqxn" event={"ID":"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3","Type":"ContainerStarted","Data":"0fbf2de03a7db29f7c362edaa5ea8d18dc71b31a696a6005d6e6830966a94ad2"} Jan 01 10:01:01 crc kubenswrapper[4867]: I0101 10:01:01.661689 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29454361-9sqxn" podStartSLOduration=1.661670161 podStartE2EDuration="1.661670161s" podCreationTimestamp="2026-01-01 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-01 10:01:01.660569409 +0000 UTC m=+5670.795838178" watchObservedRunningTime="2026-01-01 10:01:01.661670161 +0000 UTC m=+5670.796938930" Jan 01 10:01:01 crc kubenswrapper[4867]: I0101 10:01:01.689075 
4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph_2aeaad03-1c14-49f8-b417-5e4d6470db87/util/0.log" Jan 01 10:01:01 crc kubenswrapper[4867]: I0101 10:01:01.864481 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph_2aeaad03-1c14-49f8-b417-5e4d6470db87/util/0.log" Jan 01 10:01:01 crc kubenswrapper[4867]: I0101 10:01:01.867543 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph_2aeaad03-1c14-49f8-b417-5e4d6470db87/pull/0.log" Jan 01 10:01:01 crc kubenswrapper[4867]: I0101 10:01:01.891482 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph_2aeaad03-1c14-49f8-b417-5e4d6470db87/pull/0.log" Jan 01 10:01:02 crc kubenswrapper[4867]: I0101 10:01:02.039628 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph_2aeaad03-1c14-49f8-b417-5e4d6470db87/pull/0.log" Jan 01 10:01:02 crc kubenswrapper[4867]: I0101 10:01:02.066309 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph_2aeaad03-1c14-49f8-b417-5e4d6470db87/extract/0.log" Jan 01 10:01:02 crc kubenswrapper[4867]: I0101 10:01:02.075935 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_de70fa16a7ca5622188f18febf39673d50b3bc4dd3ef258c154a3707ddch6ph_2aeaad03-1c14-49f8-b417-5e4d6470db87/util/0.log" Jan 01 10:01:02 crc kubenswrapper[4867]: I0101 10:01:02.240592 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-vp4t2_795d3985-4592-42e4-aa83-aaebb35bcc6d/manager/0.log" Jan 01 10:01:02 crc kubenswrapper[4867]: I0101 10:01:02.305225 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-8h6dx_7cee2279-7f63-4416-bf5f-42e1ca8bd334/manager/0.log" Jan 01 10:01:02 crc kubenswrapper[4867]: I0101 10:01:02.460434 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-mvsxj_bbd5645a-1a67-44ef-8aa2-25fa40566538/manager/0.log" Jan 01 10:01:02 crc kubenswrapper[4867]: I0101 10:01:02.467018 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-hn42f_3b2fcdd1-2278-4f3e-b3aa-570321fafee8/manager/0.log" Jan 01 10:01:02 crc kubenswrapper[4867]: I0101 10:01:02.662535 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-5jhrr_dbc4e740-aa10-4b7b-88db-7c172dae38f9/manager/0.log" Jan 01 10:01:02 crc kubenswrapper[4867]: I0101 10:01:02.964913 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-vcnt9_8e2aeeaf-c653-49dc-9165-fc5445bb7aaf/manager/0.log" Jan 01 10:01:02 crc kubenswrapper[4867]: I0101 10:01:02.976800 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-gbrnj_a8217738-2a7f-41ee-9d06-a329d7c8dbfc/manager/0.log" Jan 01 10:01:03 crc kubenswrapper[4867]: I0101 10:01:03.002695 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-njm56_44a94a32-c18a-4e1a-8b8a-461a002ab55c/manager/0.log" Jan 01 10:01:03 crc kubenswrapper[4867]: I0101 10:01:03.181128 4867 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-svlz4_06f61537-0f61-41ee-a049-10540e971c9d/manager/0.log" Jan 01 10:01:03 crc kubenswrapper[4867]: I0101 10:01:03.223589 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-m4l4l_2c4f0e99-7a5c-4d99-8b15-3ddd97d6b6b0/manager/0.log" Jan 01 10:01:03 crc kubenswrapper[4867]: I0101 10:01:03.410138 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-276zz_cd6e1e20-2735-40b9-a1c2-313e2845ffc8/manager/0.log" Jan 01 10:01:03 crc kubenswrapper[4867]: I0101 10:01:03.438006 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-59zpg_59732ee6-32d1-48be-9e3f-a9989be15bbc/manager/0.log" Jan 01 10:01:03 crc kubenswrapper[4867]: I0101 10:01:03.668507 4867 generic.go:334] "Generic (PLEG): container finished" podID="d7c5c490-9373-4f6c-be5b-1109ec5e2bc3" containerID="989533df551d41404a3adc7f472d053c7e58e238c8ff2ad812c129fff077129c" exitCode=0 Jan 01 10:01:03 crc kubenswrapper[4867]: I0101 10:01:03.668548 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29454361-9sqxn" event={"ID":"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3","Type":"ContainerDied","Data":"989533df551d41404a3adc7f472d053c7e58e238c8ff2ad812c129fff077129c"} Jan 01 10:01:03 crc kubenswrapper[4867]: I0101 10:01:03.788166 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5c4776bcc5q4862_d721206b-841d-4c5c-9d94-202fff6b8838/manager/0.log" Jan 01 10:01:04 crc kubenswrapper[4867]: I0101 10:01:04.234346 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7657r_aeb647d1-966c-41c8-8ef3-7895ff67e463/registry-server/0.log" Jan 01 10:01:04 crc 
kubenswrapper[4867]: I0101 10:01:04.277038 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6879547b79-cctq6_ad4e0e83-de60-433b-a688-7e1bf4bd2c76/operator/0.log" Jan 01 10:01:04 crc kubenswrapper[4867]: I0101 10:01:04.531293 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-2tl66_26964f50-c878-4612-b298-634abc246f6a/manager/0.log" Jan 01 10:01:04 crc kubenswrapper[4867]: I0101 10:01:04.581313 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-4x7b9_6834a8b5-22a8-4a7c-b03f-633599137bd2/manager/0.log" Jan 01 10:01:04 crc kubenswrapper[4867]: I0101 10:01:04.792107 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7df7568dd6-9drs7_1e5669c2-43cd-4d20-9d76-67e4dee53753/manager/0.log" Jan 01 10:01:04 crc kubenswrapper[4867]: I0101 10:01:04.794536 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mvr5b_065f64f0-26e4-4b68-8dfa-1bf17f20e99b/operator/0.log" Jan 01 10:01:04 crc kubenswrapper[4867]: I0101 10:01:04.965739 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-tkjs4_2e2a7fff-5652-4f64-9660-59b811de1346/manager/0.log" Jan 01 10:01:04 crc kubenswrapper[4867]: I0101 10:01:04.996079 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.062996 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-q2rv6_8801d693-c818-4666-bda5-93d9db1d46a0/manager/0.log" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.104688 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-combined-ca-bundle\") pod \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.104760 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-fernet-keys\") pod \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.104848 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn69j\" (UniqueName: \"kubernetes.io/projected/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-kube-api-access-wn69j\") pod \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.104929 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-config-data\") pod \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\" (UID: \"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3\") " Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.111128 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-kube-api-access-wn69j" (OuterVolumeSpecName: 
"kube-api-access-wn69j") pod "d7c5c490-9373-4f6c-be5b-1109ec5e2bc3" (UID: "d7c5c490-9373-4f6c-be5b-1109ec5e2bc3"). InnerVolumeSpecName "kube-api-access-wn69j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.120031 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d7c5c490-9373-4f6c-be5b-1109ec5e2bc3" (UID: "d7c5c490-9373-4f6c-be5b-1109ec5e2bc3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.156661 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-7h2r7_96cea6bb-e017-4e6c-9298-aaf07b775dff/manager/0.log" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.169136 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-config-data" (OuterVolumeSpecName: "config-data") pod "d7c5c490-9373-4f6c-be5b-1109ec5e2bc3" (UID: "d7c5c490-9373-4f6c-be5b-1109ec5e2bc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.179570 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7c5c490-9373-4f6c-be5b-1109ec5e2bc3" (UID: "d7c5c490-9373-4f6c-be5b-1109ec5e2bc3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.207298 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.207334 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.207343 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn69j\" (UniqueName: \"kubernetes.io/projected/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-kube-api-access-wn69j\") on node \"crc\" DevicePath \"\"" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.207352 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c5c490-9373-4f6c-be5b-1109ec5e2bc3-config-data\") on node \"crc\" DevicePath \"\"" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.239585 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-jcqkc_57ffbe9b-99b1-433d-86fa-e61435d99318/manager/0.log" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.694429 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29454361-9sqxn" event={"ID":"d7c5c490-9373-4f6c-be5b-1109ec5e2bc3","Type":"ContainerDied","Data":"0fbf2de03a7db29f7c362edaa5ea8d18dc71b31a696a6005d6e6830966a94ad2"} Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.694658 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fbf2de03a7db29f7c362edaa5ea8d18dc71b31a696a6005d6e6830966a94ad2" Jan 01 10:01:05 crc kubenswrapper[4867]: I0101 10:01:05.694487 4867 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29454361-9sqxn" Jan 01 10:01:21 crc kubenswrapper[4867]: I0101 10:01:21.331816 4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 10:01:21 crc kubenswrapper[4867]: I0101 10:01:21.332709 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 10:01:23 crc kubenswrapper[4867]: I0101 10:01:23.802989 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-nsfm2_aa7410b8-9dc3-410f-9c3b-c8cac55804c7/control-plane-machine-set-operator/0.log" Jan 01 10:01:23 crc kubenswrapper[4867]: I0101 10:01:23.985498 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hmhl9_94b7004c-c318-4872-a1b7-f983c691a523/kube-rbac-proxy/0.log" Jan 01 10:01:24 crc kubenswrapper[4867]: I0101 10:01:24.024063 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hmhl9_94b7004c-c318-4872-a1b7-f983c691a523/machine-api-operator/0.log" Jan 01 10:01:37 crc kubenswrapper[4867]: I0101 10:01:37.792151 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-xg99b_65f1bc8c-693f-4044-b161-26ba5eb03cea/cert-manager-controller/0.log" Jan 01 10:01:37 crc kubenswrapper[4867]: I0101 10:01:37.932174 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-lsg2f_b43c2576-b232-4853-a699-12c3c3af0886/cert-manager-cainjector/0.log" Jan 01 10:01:38 crc kubenswrapper[4867]: I0101 10:01:38.013909 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-s5bdk_53107d29-98dc-4814-9829-38a2b5243e0d/cert-manager-webhook/0.log" Jan 01 10:01:41 crc kubenswrapper[4867]: I0101 10:01:41.985725 4867 scope.go:117] "RemoveContainer" containerID="3d4fe00bd9f7ab58832c31b6e067e03ee28e44f37f7596d5b78ad71f16ad7b65" Jan 01 10:01:42 crc kubenswrapper[4867]: I0101 10:01:42.010512 4867 scope.go:117] "RemoveContainer" containerID="e1533fa58c3a26b664b7954368c94de4e3fd4cc5a9de896b829a2dfdda06c482" Jan 01 10:01:50 crc kubenswrapper[4867]: I0101 10:01:50.912027 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-2hlv5_5f75e4a7-c411-444c-820f-168a7f5e51fb/nmstate-console-plugin/0.log" Jan 01 10:01:51 crc kubenswrapper[4867]: I0101 10:01:51.086350 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-kt6pp_d382f382-e329-4c64-9d5f-daa382470de3/nmstate-handler/0.log" Jan 01 10:01:51 crc kubenswrapper[4867]: I0101 10:01:51.151359 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-56bcb_5c066e2d-baa4-4f40-a024-e8b4a5c67e1a/kube-rbac-proxy/0.log" Jan 01 10:01:51 crc kubenswrapper[4867]: I0101 10:01:51.218273 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-56bcb_5c066e2d-baa4-4f40-a024-e8b4a5c67e1a/nmstate-metrics/0.log" Jan 01 10:01:51 crc kubenswrapper[4867]: I0101 10:01:51.296959 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-4nkqq_5814d320-5a21-4996-96a3-0a19c1d304f2/nmstate-operator/0.log" Jan 01 10:01:51 crc kubenswrapper[4867]: I0101 10:01:51.331213 
4867 patch_prober.go:28] interesting pod/machine-config-daemon-69jph container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 01 10:01:51 crc kubenswrapper[4867]: I0101 10:01:51.331278 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 01 10:01:51 crc kubenswrapper[4867]: I0101 10:01:51.331327 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-69jph" Jan 01 10:01:51 crc kubenswrapper[4867]: I0101 10:01:51.332129 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46"} pod="openshift-machine-config-operator/machine-config-daemon-69jph" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 01 10:01:51 crc kubenswrapper[4867]: I0101 10:01:51.332202 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerName="machine-config-daemon" containerID="cri-o://4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" gracePeriod=600 Jan 01 10:01:51 crc kubenswrapper[4867]: I0101 10:01:51.413843 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-x2hhj_55028d16-a688-40c2-a1e7-eacb136d5ea1/nmstate-webhook/0.log" Jan 01 10:01:51 crc kubenswrapper[4867]: 
E0101 10:01:51.964386 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:01:52 crc kubenswrapper[4867]: I0101 10:01:52.069718 4867 generic.go:334] "Generic (PLEG): container finished" podID="4608a141-23bd-4286-8607-ad4b16b5ee11" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" exitCode=0 Jan 01 10:01:52 crc kubenswrapper[4867]: I0101 10:01:52.069754 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-69jph" event={"ID":"4608a141-23bd-4286-8607-ad4b16b5ee11","Type":"ContainerDied","Data":"4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46"} Jan 01 10:01:52 crc kubenswrapper[4867]: I0101 10:01:52.069819 4867 scope.go:117] "RemoveContainer" containerID="41d797c5a9ee389d0167543c461e0396ce5911531f543d8083183360c9bf4c88" Jan 01 10:01:52 crc kubenswrapper[4867]: I0101 10:01:52.070333 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:01:52 crc kubenswrapper[4867]: E0101 10:01:52.070601 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.128119 4867 scope.go:117] "RemoveContainer" 
containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:02:06 crc kubenswrapper[4867]: E0101 10:02:06.129063 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.168592 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-6k4kx_edc90365-7a93-4042-ba66-e7ee4e6ba188/kube-rbac-proxy/0.log" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.355722 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-frr-files/0.log" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.479943 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-6k4kx_edc90365-7a93-4042-ba66-e7ee4e6ba188/controller/0.log" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.613921 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-frr-files/0.log" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.624603 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-reloader/0.log" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.670775 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-reloader/0.log" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.682760 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-metrics/0.log" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.840564 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-reloader/0.log" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.853788 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-frr-files/0.log" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.894497 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-metrics/0.log" Jan 01 10:02:06 crc kubenswrapper[4867]: I0101 10:02:06.917667 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-metrics/0.log" Jan 01 10:02:07 crc kubenswrapper[4867]: I0101 10:02:07.090725 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-metrics/0.log" Jan 01 10:02:07 crc kubenswrapper[4867]: I0101 10:02:07.098465 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-frr-files/0.log" Jan 01 10:02:07 crc kubenswrapper[4867]: I0101 10:02:07.112012 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/cp-reloader/0.log" Jan 01 10:02:07 crc kubenswrapper[4867]: I0101 10:02:07.146119 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/controller/0.log" Jan 01 10:02:07 crc kubenswrapper[4867]: I0101 10:02:07.292722 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/frr-metrics/0.log" Jan 01 10:02:07 crc kubenswrapper[4867]: I0101 10:02:07.363577 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/kube-rbac-proxy/0.log" Jan 01 10:02:07 crc kubenswrapper[4867]: I0101 10:02:07.471439 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/kube-rbac-proxy-frr/0.log" Jan 01 10:02:07 crc kubenswrapper[4867]: I0101 10:02:07.501222 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/reloader/0.log" Jan 01 10:02:07 crc kubenswrapper[4867]: I0101 10:02:07.698856 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-g8s2w_1fe21b9b-fc35-41dd-aa42-deb7bef61c21/frr-k8s-webhook-server/0.log" Jan 01 10:02:07 crc kubenswrapper[4867]: I0101 10:02:07.837670 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-74c858c997-zhnxl_1a30748a-7ac7-4db0-89b3-17a43c7e3fde/manager/0.log" Jan 01 10:02:07 crc kubenswrapper[4867]: I0101 10:02:07.935030 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-66c4777458-fml45_24863f96-c5b8-4c66-bcf6-5e796cf8068a/webhook-server/0.log" Jan 01 10:02:08 crc kubenswrapper[4867]: I0101 10:02:08.063084 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cc2m9_2996cfdf-82a5-4df9-b1e8-5553e35489b4/kube-rbac-proxy/0.log" Jan 01 10:02:08 crc kubenswrapper[4867]: I0101 10:02:08.742674 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cc2m9_2996cfdf-82a5-4df9-b1e8-5553e35489b4/speaker/0.log" Jan 01 10:02:08 crc kubenswrapper[4867]: I0101 10:02:08.944565 4867 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-blnbq_46986363-a0a5-4868-83b8-b1536fb75705/frr/0.log" Jan 01 10:02:20 crc kubenswrapper[4867]: I0101 10:02:20.129336 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:02:20 crc kubenswrapper[4867]: E0101 10:02:20.130368 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:02:23 crc kubenswrapper[4867]: I0101 10:02:23.762803 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl_c5482c47-b1ad-4526-b3f3-b0388ae47cc9/util/0.log" Jan 01 10:02:23 crc kubenswrapper[4867]: I0101 10:02:23.864979 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl_c5482c47-b1ad-4526-b3f3-b0388ae47cc9/util/0.log" Jan 01 10:02:23 crc kubenswrapper[4867]: I0101 10:02:23.908663 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl_c5482c47-b1ad-4526-b3f3-b0388ae47cc9/pull/0.log" Jan 01 10:02:23 crc kubenswrapper[4867]: I0101 10:02:23.972747 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl_c5482c47-b1ad-4526-b3f3-b0388ae47cc9/pull/0.log" Jan 01 10:02:24 crc kubenswrapper[4867]: I0101 10:02:24.108898 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl_c5482c47-b1ad-4526-b3f3-b0388ae47cc9/util/0.log" Jan 01 10:02:24 crc kubenswrapper[4867]: I0101 10:02:24.148590 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl_c5482c47-b1ad-4526-b3f3-b0388ae47cc9/pull/0.log" Jan 01 10:02:24 crc kubenswrapper[4867]: I0101 10:02:24.168240 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5wcpl_c5482c47-b1ad-4526-b3f3-b0388ae47cc9/extract/0.log" Jan 01 10:02:24 crc kubenswrapper[4867]: I0101 10:02:24.297641 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w_a970fbd3-2646-4156-9fe9-a7c33b86b488/util/0.log" Jan 01 10:02:24 crc kubenswrapper[4867]: I0101 10:02:24.503502 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w_a970fbd3-2646-4156-9fe9-a7c33b86b488/util/0.log" Jan 01 10:02:24 crc kubenswrapper[4867]: I0101 10:02:24.531301 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w_a970fbd3-2646-4156-9fe9-a7c33b86b488/pull/0.log" Jan 01 10:02:24 crc kubenswrapper[4867]: I0101 10:02:24.537131 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w_a970fbd3-2646-4156-9fe9-a7c33b86b488/pull/0.log" Jan 01 10:02:24 crc kubenswrapper[4867]: I0101 10:02:24.674506 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w_a970fbd3-2646-4156-9fe9-a7c33b86b488/util/0.log" Jan 01 
10:02:24 crc kubenswrapper[4867]: I0101 10:02:24.709045 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w_a970fbd3-2646-4156-9fe9-a7c33b86b488/pull/0.log" Jan 01 10:02:24 crc kubenswrapper[4867]: I0101 10:02:24.715467 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zkv2w_a970fbd3-2646-4156-9fe9-a7c33b86b488/extract/0.log" Jan 01 10:02:24 crc kubenswrapper[4867]: I0101 10:02:24.968770 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9_c55b6c63-182a-4871-8b23-55a3edc099a6/util/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.168081 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9_c55b6c63-182a-4871-8b23-55a3edc099a6/util/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.207024 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9_c55b6c63-182a-4871-8b23-55a3edc099a6/pull/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.215763 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9_c55b6c63-182a-4871-8b23-55a3edc099a6/pull/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.379195 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9_c55b6c63-182a-4871-8b23-55a3edc099a6/pull/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.393151 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9_c55b6c63-182a-4871-8b23-55a3edc099a6/util/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.438297 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jxdv9_c55b6c63-182a-4871-8b23-55a3edc099a6/extract/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.544384 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwn5j_6eda25f9-77e0-48e5-bcf0-007254ff593e/extract-utilities/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.713436 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwn5j_6eda25f9-77e0-48e5-bcf0-007254ff593e/extract-content/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.722555 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwn5j_6eda25f9-77e0-48e5-bcf0-007254ff593e/extract-content/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.747540 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwn5j_6eda25f9-77e0-48e5-bcf0-007254ff593e/extract-utilities/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.876135 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwn5j_6eda25f9-77e0-48e5-bcf0-007254ff593e/extract-content/0.log" Jan 01 10:02:25 crc kubenswrapper[4867]: I0101 10:02:25.959320 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwn5j_6eda25f9-77e0-48e5-bcf0-007254ff593e/extract-utilities/0.log" Jan 01 10:02:26 crc kubenswrapper[4867]: I0101 10:02:26.113128 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-nj5s2_d7514bc2-fed6-4888-ad51-5849e664cf35/extract-utilities/0.log" Jan 01 10:02:26 crc kubenswrapper[4867]: I0101 10:02:26.276692 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nj5s2_d7514bc2-fed6-4888-ad51-5849e664cf35/extract-utilities/0.log" Jan 01 10:02:26 crc kubenswrapper[4867]: I0101 10:02:26.298918 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nj5s2_d7514bc2-fed6-4888-ad51-5849e664cf35/extract-content/0.log" Jan 01 10:02:26 crc kubenswrapper[4867]: I0101 10:02:26.378811 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nwn5j_6eda25f9-77e0-48e5-bcf0-007254ff593e/registry-server/0.log" Jan 01 10:02:26 crc kubenswrapper[4867]: I0101 10:02:26.392457 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nj5s2_d7514bc2-fed6-4888-ad51-5849e664cf35/extract-content/0.log" Jan 01 10:02:26 crc kubenswrapper[4867]: I0101 10:02:26.477432 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nj5s2_d7514bc2-fed6-4888-ad51-5849e664cf35/extract-utilities/0.log" Jan 01 10:02:26 crc kubenswrapper[4867]: I0101 10:02:26.512596 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nj5s2_d7514bc2-fed6-4888-ad51-5849e664cf35/extract-content/0.log" Jan 01 10:02:26 crc kubenswrapper[4867]: I0101 10:02:26.643762 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-m7q68_74dd8dbf-6baa-483d-8228-1248a8e3b791/marketplace-operator/0.log" Jan 01 10:02:26 crc kubenswrapper[4867]: I0101 10:02:26.913747 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-sf9dk_39bfdca6-5787-4eaa-bc02-41f54ae947ee/extract-utilities/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.052258 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sf9dk_39bfdca6-5787-4eaa-bc02-41f54ae947ee/extract-utilities/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.116658 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sf9dk_39bfdca6-5787-4eaa-bc02-41f54ae947ee/extract-content/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.158989 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sf9dk_39bfdca6-5787-4eaa-bc02-41f54ae947ee/extract-content/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.322420 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sf9dk_39bfdca6-5787-4eaa-bc02-41f54ae947ee/extract-content/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.322735 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sf9dk_39bfdca6-5787-4eaa-bc02-41f54ae947ee/extract-utilities/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.393215 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nj5s2_d7514bc2-fed6-4888-ad51-5849e664cf35/registry-server/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.467235 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wdh87_1f8112e9-ec27-48d7-8e3a-491aaa03daa9/extract-utilities/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.563557 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-sf9dk_39bfdca6-5787-4eaa-bc02-41f54ae947ee/registry-server/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.663920 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wdh87_1f8112e9-ec27-48d7-8e3a-491aaa03daa9/extract-utilities/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.677641 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wdh87_1f8112e9-ec27-48d7-8e3a-491aaa03daa9/extract-content/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.707255 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wdh87_1f8112e9-ec27-48d7-8e3a-491aaa03daa9/extract-content/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.868513 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wdh87_1f8112e9-ec27-48d7-8e3a-491aaa03daa9/extract-utilities/0.log" Jan 01 10:02:27 crc kubenswrapper[4867]: I0101 10:02:27.879701 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wdh87_1f8112e9-ec27-48d7-8e3a-491aaa03daa9/extract-content/0.log" Jan 01 10:02:28 crc kubenswrapper[4867]: I0101 10:02:28.632099 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wdh87_1f8112e9-ec27-48d7-8e3a-491aaa03daa9/registry-server/0.log" Jan 01 10:02:32 crc kubenswrapper[4867]: I0101 10:02:32.129349 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:02:32 crc kubenswrapper[4867]: E0101 10:02:32.130169 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:02:42 crc kubenswrapper[4867]: I0101 10:02:42.113995 4867 scope.go:117] "RemoveContainer" containerID="3384bd80934c491a9f17feb7c4920f13a50a2d6bc000a32a889d8e5291248894" Jan 01 10:02:44 crc kubenswrapper[4867]: I0101 10:02:44.128740 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:02:44 crc kubenswrapper[4867]: E0101 10:02:44.129473 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:02:58 crc kubenswrapper[4867]: I0101 10:02:58.128884 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:02:58 crc kubenswrapper[4867]: E0101 10:02:58.129928 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:03:09 crc kubenswrapper[4867]: I0101 10:03:09.129987 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:03:09 crc kubenswrapper[4867]: E0101 10:03:09.137035 4867 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:03:22 crc kubenswrapper[4867]: I0101 10:03:22.129343 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:03:22 crc kubenswrapper[4867]: E0101 10:03:22.130517 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:03:36 crc kubenswrapper[4867]: I0101 10:03:36.129167 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:03:36 crc kubenswrapper[4867]: E0101 10:03:36.131082 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:03:49 crc kubenswrapper[4867]: E0101 10:03:49.779979 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod456bdd85_1e92_4149_a408_d7bef1042b83.slice/crio-4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8.scope\": RecentStats: unable to find data in memory cache]" Jan 01 10:03:50 crc kubenswrapper[4867]: I0101 10:03:50.129323 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:03:50 crc kubenswrapper[4867]: E0101 10:03:50.130368 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:03:50 crc kubenswrapper[4867]: I0101 10:03:50.223817 4867 generic.go:334] "Generic (PLEG): container finished" podID="456bdd85-1e92-4149-a408-d7bef1042b83" containerID="4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8" exitCode=0 Jan 01 10:03:50 crc kubenswrapper[4867]: I0101 10:03:50.223912 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2k4nq/must-gather-94tz9" event={"ID":"456bdd85-1e92-4149-a408-d7bef1042b83","Type":"ContainerDied","Data":"4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8"} Jan 01 10:03:50 crc kubenswrapper[4867]: I0101 10:03:50.224738 4867 scope.go:117] "RemoveContainer" containerID="4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8" Jan 01 10:03:50 crc kubenswrapper[4867]: I0101 10:03:50.786727 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2k4nq_must-gather-94tz9_456bdd85-1e92-4149-a408-d7bef1042b83/gather/0.log" Jan 01 10:03:58 crc kubenswrapper[4867]: I0101 10:03:58.500453 4867 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-must-gather-2k4nq/must-gather-94tz9"] Jan 01 10:03:58 crc kubenswrapper[4867]: I0101 10:03:58.501392 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2k4nq/must-gather-94tz9" podUID="456bdd85-1e92-4149-a408-d7bef1042b83" containerName="copy" containerID="cri-o://fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f" gracePeriod=2 Jan 01 10:03:58 crc kubenswrapper[4867]: I0101 10:03:58.507251 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2k4nq/must-gather-94tz9"] Jan 01 10:03:58 crc kubenswrapper[4867]: I0101 10:03:58.947734 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2k4nq_must-gather-94tz9_456bdd85-1e92-4149-a408-d7bef1042b83/copy/0.log" Jan 01 10:03:58 crc kubenswrapper[4867]: I0101 10:03:58.948898 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2k4nq/must-gather-94tz9" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.035032 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/456bdd85-1e92-4149-a408-d7bef1042b83-must-gather-output\") pod \"456bdd85-1e92-4149-a408-d7bef1042b83\" (UID: \"456bdd85-1e92-4149-a408-d7bef1042b83\") " Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.035178 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl5vm\" (UniqueName: \"kubernetes.io/projected/456bdd85-1e92-4149-a408-d7bef1042b83-kube-api-access-rl5vm\") pod \"456bdd85-1e92-4149-a408-d7bef1042b83\" (UID: \"456bdd85-1e92-4149-a408-d7bef1042b83\") " Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.041738 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456bdd85-1e92-4149-a408-d7bef1042b83-kube-api-access-rl5vm" (OuterVolumeSpecName: 
"kube-api-access-rl5vm") pod "456bdd85-1e92-4149-a408-d7bef1042b83" (UID: "456bdd85-1e92-4149-a408-d7bef1042b83"). InnerVolumeSpecName "kube-api-access-rl5vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.137625 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl5vm\" (UniqueName: \"kubernetes.io/projected/456bdd85-1e92-4149-a408-d7bef1042b83-kube-api-access-rl5vm\") on node \"crc\" DevicePath \"\"" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.162673 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456bdd85-1e92-4149-a408-d7bef1042b83-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "456bdd85-1e92-4149-a408-d7bef1042b83" (UID: "456bdd85-1e92-4149-a408-d7bef1042b83"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.240384 4867 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/456bdd85-1e92-4149-a408-d7bef1042b83-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.317431 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2k4nq_must-gather-94tz9_456bdd85-1e92-4149-a408-d7bef1042b83/copy/0.log" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.317877 4867 generic.go:334] "Generic (PLEG): container finished" podID="456bdd85-1e92-4149-a408-d7bef1042b83" containerID="fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f" exitCode=143 Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.317948 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2k4nq/must-gather-94tz9" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.317975 4867 scope.go:117] "RemoveContainer" containerID="fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.336804 4867 scope.go:117] "RemoveContainer" containerID="4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.389730 4867 scope.go:117] "RemoveContainer" containerID="fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f" Jan 01 10:03:59 crc kubenswrapper[4867]: E0101 10:03:59.390126 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f\": container with ID starting with fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f not found: ID does not exist" containerID="fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.390158 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f"} err="failed to get container status \"fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f\": rpc error: code = NotFound desc = could not find container \"fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f\": container with ID starting with fb5b1393037d09e361526f850f3df04cae305fd854e3fb80beca486da9d2f19f not found: ID does not exist" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.390179 4867 scope.go:117] "RemoveContainer" containerID="4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8" Jan 01 10:03:59 crc kubenswrapper[4867]: E0101 10:03:59.390754 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8\": container with ID starting with 4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8 not found: ID does not exist" containerID="4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8" Jan 01 10:03:59 crc kubenswrapper[4867]: I0101 10:03:59.390773 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8"} err="failed to get container status \"4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8\": rpc error: code = NotFound desc = could not find container \"4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8\": container with ID starting with 4c48ab831b665243d1d7aeec48b920e76f81154a1ad0d396ab2f8adfccb985a8 not found: ID does not exist" Jan 01 10:04:01 crc kubenswrapper[4867]: I0101 10:04:01.137604 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456bdd85-1e92-4149-a408-d7bef1042b83" path="/var/lib/kubelet/pods/456bdd85-1e92-4149-a408-d7bef1042b83/volumes" Jan 01 10:04:04 crc kubenswrapper[4867]: I0101 10:04:04.128303 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:04:04 crc kubenswrapper[4867]: E0101 10:04:04.129483 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:04:18 crc kubenswrapper[4867]: I0101 10:04:18.128444 4867 scope.go:117] "RemoveContainer" 
containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:04:18 crc kubenswrapper[4867]: E0101 10:04:18.129338 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:04:32 crc kubenswrapper[4867]: I0101 10:04:32.128673 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:04:32 crc kubenswrapper[4867]: E0101 10:04:32.129711 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:04:47 crc kubenswrapper[4867]: I0101 10:04:47.129734 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:04:47 crc kubenswrapper[4867]: E0101 10:04:47.131310 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:04:58 crc kubenswrapper[4867]: I0101 10:04:58.129301 4867 scope.go:117] 
"RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:04:58 crc kubenswrapper[4867]: E0101 10:04:58.130384 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:05:09 crc kubenswrapper[4867]: I0101 10:05:09.128852 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:05:09 crc kubenswrapper[4867]: E0101 10:05:09.129636 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:05:21 crc kubenswrapper[4867]: I0101 10:05:21.133559 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:05:21 crc kubenswrapper[4867]: E0101 10:05:21.134222 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:05:32 crc kubenswrapper[4867]: I0101 10:05:32.128573 
4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:05:32 crc kubenswrapper[4867]: E0101 10:05:32.129616 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:05:47 crc kubenswrapper[4867]: I0101 10:05:47.129806 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:05:47 crc kubenswrapper[4867]: E0101 10:05:47.131055 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:05:58 crc kubenswrapper[4867]: I0101 10:05:58.128910 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:05:58 crc kubenswrapper[4867]: E0101 10:05:58.130523 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:06:13 crc kubenswrapper[4867]: I0101 
10:06:13.129372 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:06:13 crc kubenswrapper[4867]: E0101 10:06:13.130390 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11" Jan 01 10:06:17 crc kubenswrapper[4867]: I0101 10:06:17.067748 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f125-account-create-update-jh6hd"] Jan 01 10:06:17 crc kubenswrapper[4867]: I0101 10:06:17.080738 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-xz5g7"] Jan 01 10:06:17 crc kubenswrapper[4867]: I0101 10:06:17.089154 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f125-account-create-update-jh6hd"] Jan 01 10:06:17 crc kubenswrapper[4867]: I0101 10:06:17.096604 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-xz5g7"] Jan 01 10:06:17 crc kubenswrapper[4867]: I0101 10:06:17.147950 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f39adf8-53ff-43ed-8646-9bee0c5fad79" path="/var/lib/kubelet/pods/3f39adf8-53ff-43ed-8646-9bee0c5fad79/volumes" Jan 01 10:06:17 crc kubenswrapper[4867]: I0101 10:06:17.149503 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f374d6d6-c44d-4f8a-b4ed-ee985a92300f" path="/var/lib/kubelet/pods/f374d6d6-c44d-4f8a-b4ed-ee985a92300f/volumes" Jan 01 10:06:23 crc kubenswrapper[4867]: I0101 10:06:23.037562 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4gdmp"] Jan 01 10:06:23 crc kubenswrapper[4867]: 
I0101 10:06:23.060457 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4gdmp"] Jan 01 10:06:23 crc kubenswrapper[4867]: I0101 10:06:23.141420 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27def929-31ca-4a9c-af0e-5830dddceab9" path="/var/lib/kubelet/pods/27def929-31ca-4a9c-af0e-5830dddceab9/volumes" Jan 01 10:06:25 crc kubenswrapper[4867]: I0101 10:06:25.129482 4867 scope.go:117] "RemoveContainer" containerID="4263e693e008f2e7c549f6d1a594c0a089e1f59ba3be2adb0d4b5d5430950a46" Jan 01 10:06:25 crc kubenswrapper[4867]: E0101 10:06:25.129946 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-69jph_openshift-machine-config-operator(4608a141-23bd-4286-8607-ad4b16b5ee11)\"" pod="openshift-machine-config-operator/machine-config-daemon-69jph" podUID="4608a141-23bd-4286-8607-ad4b16b5ee11"